How to Improve Your Virtual Experience – Exploring the Obstacles of Mainstream VR
Von der Fakultät für Ingenieurwissenschaften, Abteilung Informatik und Angewandte Kognitionswissenschaft der Universität Duisburg-Essen zur Erlangung des akademischen Grades Doktor der Naturwissenschaften (Dr. rer. nat.) genehmigte kumulative Dissertation von Andrey Krekhov aus Ufa, Russland.
Gutachter: Prof. Dr. Jens Krüger
Gutachter: Prof. Dr. Maic Masuch
Tag der mündlichen Prüfung: 18.12.2019
Cumulative Dissertation, December 18, 2019
Reviewers: Prof. Dr. Jens Krüger and Prof. Dr. Maic Masuch
University of Duisburg-Essen
High Performance Computing Group (HPC)
Lotharstr. 65, 47057 Duisburg, Germany

List of Publications

[1]: S. Cmentowski, A. Krekhov, and J. Krüger. "'I Packed my Bag and in It I Put...': A Taxonomy of Inventory Systems for Virtual Reality Games". In: Submission to the 2020 CHI Conference on Human Factors in Computing Systems (under review).
[2]: A. Krekhov and K. Emmerich. "Player Locomotion in Virtual Reality Games". In: The Digital Gaming Handbook (accepted).
[3]: A. Krekhov, S. Cmentowski, A. Waschk, and J. Krüger. "Deadeye Visualization Revisited: Investigation of Preattentiveness and Applicability in Virtual Environments". In: IEEE Transactions on Visualization and Computer Graphics.
[4]: A. Krekhov, S. Cmentowski, K. Emmerich, and J. Krüger. "Beyond Human: Animals As an Escape from Stereotype Avatars in Virtual Reality Games". In: Proceedings of the Annual Symposium on Computer-Human Interaction in Play.
[5]: S. Cmentowski, A. Krekhov, and J. Krüger. "Outstanding: A Multi-Perspective Travel Approach for Virtual Reality Games". In: Proceedings of the Annual Symposium on Computer-Human Interaction in Play.
[6]: S. Cmentowski, A. Krekhov, A. Müller, and J. Krüger. "Toward a Taxonomy of Inventory Systems for Virtual Reality Games". In: Extended Abstracts of the Annual Symposium on Computer-Human Interaction in Play Companion Extended Abstracts.
[7]: A. Krekhov, S. Cmentowski, and J. Krüger. "The Illusion of Animal Body Ownership and Its Potential for Virtual Reality Games". In:
[8]: A. Krekhov, M. Michalski, and J. Krüger. "Integrating Visualization Literacy into Computer Graphics Education Using the Example of Dear Data". In: Eurographics 2019 - Education Papers.
[9]: S. Cmentowski, A. Krekhov, and J. Krüger. "Outstanding: A Perspective-Switching Technique for Covering Large Distances in VR Games". In: Extended Abstracts of the 2019 CHI Conference on Human Factors in Computing Systems.
[10]: A. Krekhov and J. Krüger. "Deadeye: A Novel Preattentive Visualization Technique Based on Dichoptic Presentation". In: IEEE Transactions on Visualization and Computer Graphics. Best Paper Award.
[11]: A. Krekhov, S. Cmentowski, K. Emmerich, M. Masuch, and J. Krüger. "GulliVR: A Walking-Oriented Technique for Navigation in Virtual Reality Games Based on Virtual Body Resizing". In: Proceedings of the 2018 Annual Symposium on Computer-Human Interaction in Play. Honorable Mention Award.
[12]: A. Krekhov, S. Cmentowski, and J. Krüger. "VR Animals: Surreal Body Ownership in Virtual Reality Games". In: Proceedings of the 2018 Annual Symposium on Computer-Human Interaction in Play Companion Extended Abstracts.
[13]: A. Krekhov, K. Emmerich, P. Bergmann, S. Cmentowski, and J. Krüger. "Self-Transforming Controllers for Virtual Reality First Person Shooters". In: Proceedings of the Annual Symposium on Computer-Human Interaction in Play.
[14]: A. Krekhov, K. Emmerich, M. Babinski, and J. Krüger. "Gestures From the Point of View of an Audience: Towards Anticipatable Interaction of Presenters With 3D Content". In: Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems.
[15]: A. Krekhov, J. Grüninger, K. Baum, D. McCann, and J. Krüger. "MorphableUI: A Hypergraph-Based Approach to Distributed Multimodal Interaction for Rapid Prototyping and Changing Environments". In: Proceedings of The 24th International Conference in Central Europe on Computer Graphics, Visualization and Computer Vision 2016.
[16]: F. Daiber, A. Krekhov, M. Speicher, J. Krüger, and A. Krüger. "A Framework for Prototyping and Evaluation of Sensor-based Mobile Interaction with Stereoscopic 3D". In: ACM ITS Workshop on Interactive Surfaces for Interaction with Stereoscopic 3D.

Abstract
What is Virtual Reality (VR)? A professional tool, made to facilitate our everyday tasks? A conceptual mistake, accompanied by cybersickness and unsolved locomotion issues since the very beginning? Or just another source of entertainment that helps us escape from our deteriorating world? The public and scientific opinions in this respect are diverse. Furthermore, as researchers, we sometimes ask ourselves whether our work in this area is really "worth it", given the ambiguous prognosis regarding the future of VR. To tackle this question, we explore three different areas of VR research in this dissertation, namely locomotion, interaction, and perception. We begin our journey by structuring the VR locomotion landscape and by introducing a novel locomotion concept for large-distance traveling via virtual body resizing. In the second part, we focus on our interaction possibilities in VR. We learn how to represent virtual objects via self-transforming controllers and how to store our items in VR inventories. We design comprehensive 3D gestures for the audience and provide an I/O abstraction layer to facilitate the realization and usage of such diverse interaction modalities. The third part is dedicated to the exploration of perceptual phenomena in VR. In contrast to locomotion and interaction, where we mainly deal with the shortcomings of VR, our contributions in the field of perception emphasize the strong points of immersive setups. We utilize VR to transfer the illusion of virtual body ownership to nonhumanoid avatars and exploit this phenomenon for novel gaming experiences with animals in the leading role. As one of our most significant contributions, we demonstrate how to repurpose the dichoptic presentation capability of immersive setups for preattentive zero-overhead highlighting in visualizations. We round off the dissertation by coming back to VR research in general, providing a critical assessment of our contributions and sharing our lessons learned along the way.
Abstract (German)

Was ist die virtuelle Realität (VR)? Ein professionelles Werkzeug für die alltäglichen Aufgaben? Ein konzeptioneller Fehler, begleitet von Cybersickness und von den bis heute ungelösten Problemen der Fortbewegung? Oder einfach nur eine weitere Unterhaltungsquelle, die uns die Flucht aus der unangenehmen Realität erleichtert? Die öffentliche und wissenschaftliche Meinung diesbezüglich ist gespalten. Und wir als Forscher fragen uns nicht selten, ob sich unsere Arbeit auf diesem Gebiet tatsächlich als lohnend herausstellt, wenn man all die zweideutigen Prognosen hinsichtlich der Zukunftsfähigkeit von VR berücksichtigt. Um eine Antwort auf diese Frage zu finden, betrachten wir in dieser Dissertation drei unterschiedliche VR-Forschungsgebiete: Lokomotion, Interaktion und Perzeption. Wir beginnen unsere Reise mit der Strukturierung der vorhandenen Fortbewegungsarten in VR und der Einführung eines neuen Konzepts zum Zurücklegen großer Distanzen durch virtuelle Körpergrößenänderung. Im zweiten Teil der Arbeit fokussieren wir uns auf die Interaktionsmöglichkeiten in VR. Wir erlernen den Bau von transformierbaren Controllern zur Repräsentation von virtuellen Objekten und betrachten das Konzept von Inventaren in VR. Ferner entwickeln wir publikumsorientierte Interaktionsgesten und führen eine Abstraktionsebene für I/O-Handling ein, um die Umsetzung und Nutzung solch mannigfaltiger Interaktionsmöglichkeiten in VR zu erleichtern. Der dritte Teil der Arbeit widmet sich der Erkundung verschiedener Wahrnehmungsphänomene in VR. Im Gegensatz zu unserer Forschung auf den Gebieten der Lokomotion und Interaktion, die sich vor allem mit den Problemen und Herausforderungen in VR beschäftigte, konzentrieren wir uns hinsichtlich Perzeption vor allem auf die starken Seiten von immersiven Setups. Wir verwenden VR, um die Illusion des virtuellen Körperbesitzes auf nicht-humanoide Avatare zu übertragen, und nutzen dieses Phänomen, um neuartige Spielkonzepte mit Tieren in der Hauptrolle zu erschaffen. Als eine unserer Haupterrungenschaften zeigen wir ferner auf, wie die dichoptische Präsentationsfähigkeit eines immersiven Setups ohne Zusatzaufwand für präattentives Highlighting im Bereich der Visualisierung eingesetzt werden kann. In einer abschließenden Diskussion führen wir eine kritische Bewertung unserer wissenschaftlichen Beiträge durch und teilen unsere während der Promotion erworbenen Erfahrungen auf dem Gebiet der VR-Forschung.

Introduction
Virtual Reality (VR) has found its way into everyday life. Or has it? On the one hand, the industry took critical leaps forward and established affordable living room VR setups. Software companies are extending their products to VR and promise unique benefits, be it for entertainment or business purposes. On the other hand, however, researchers are getting more and more vocal (cf. Figure 1.1) regarding the omnipresent obstacles of VR that have not changed much in recent decades: limited locomotion, inferior interaction, and the risk of cybersickness, to name a few.
Fig. 1.1: Mixed voices on Twitter regarding the present and future of virtual reality.
As prospective or even established researchers, we often ask ourselves what impact our work in a certain area might have and whether it is "worth it". So, is VR doomed, as some people claim, or are there good chances that our research efforts will be rewarded? That is exactly the central question that unites the publications gathered in this thesis. As the title already reveals, we dive into multiple facets of VR to gain an impression of whether and how research might change the status quo of mainstream VR. The particular contributions range from fundamental techniques, such as novel locomotion approaches, to application domains, such as scientific visualization. We dedicate the introduction to answering three major questions of the reader:
• Why was the thesis written (and why should I read it)?
• What is the thesis (not) about?
• How is the manuscript organized?

Note that this is a cumulative dissertation. Hence, in contrast to traditional manuscripts, the present synopsis is a high-level pointer to published works rather than a verbose, in-depth elaboration of a particular research topic. The main objectives of our synopsis are to establish a common thread through the conducted research, to familiarize the readers with the most important outcomes, and to draw conclusions related to the present and future of virtual reality.

Motives for Revisiting VR

One crucial argument among opponents of VR (research) is that the base approach of VR is not novel and has not evolved in the last decades. Nearly all "VR booms" and associated promises failed, and many of us see no reason to believe the opposite regarding the current VR era. In the early nineties, we saw the first commercial setups, such as the Sega VR, and magazines predicted "affordable VR by 1994" [Eng92]. Now, more than twenty years later, VR is finally regarded as mainstream, arriving in the price range of consoles and high-end smartphones. Yet the idea remains the same: a stereo pair of images and head tracking form the basis for an experience that is well known and has been studied extensively for decades. So why is it worth reconsidering?

As always, the devil is in the details. While there was no major invention that revolutionized VR overnight, several subtle advances and incremental changes, both from a technical and a psychological perspective, give us enough reason to revisit VR. We can aggregate such changes in the following categories:
When it comes to technology, the most remarkable progress was achieved regarding the three following characteristics of VR setups: affordability, comfort, and features.

Affordability. In general, the price tag is not important in research. However, readily available head-mounted displays (HMDs) created a joint base for VR experiments and increased reproducibility. Thanks to default setups, such as the HTC Vive ecosystem, researchers began to share their testbed implementations and best practices. This interchange enhanced the overall robustness and transferability of the results.
Comfort. In the past, wearing a heavy and chunky HMD unavoidably introduced bias due to discomfort, which limited the potential of VR. Nowadays, HMDs weigh around 500 grams overall and can be considered at least bearable for a prolonged period. More importantly, the show-stopping and dangerous cable clutter has finally disappeared, be it with all-in-one solutions, such as the Oculus Quest, or even with desktop VR variants, such as the HTC Vive Pro with a wireless adapter.
Features. Affordable and comfortable HMDs compete on the market, and this competition ensures that manufacturers have to add unique features to their products. Hence, users finally receive acceptable display resolutions and an increased, yet still narrow, field of view. Moreover, VR setups now include sophisticated and precise controllers to enhance the interaction with VR content. And, perhaps most importantly, room-scale VR paved its way into production, which allows us to utilize the most realistic locomotion technique: natural walking.
The progress of VR is not driven by hardware alone. Advances in computer vision were a crucial step toward enabling room-scale setups, be it for desktop-based infrared tracking approaches (e.g., HTC Vive) or standalone optical tracking solutions (e.g., Oculus Quest). On the developer (and researcher) side, the increasing spread of consumer-grade HMDs gave rise to the integration of VR hardware support into popular frameworks and game engines, such as Unity 3D [Tec18]. This integration, in turn, opens VR to a broader mass of potential developers, including indie companies.

More importantly, such a joint base facilitates the exchange of scientific results, testbed scenarios, assets, and tutorials. Hence, it is not surprising that the majority of current VR publications rely on Unity 3D or similar engines due to the significantly reduced turnaround time from idea to prototype. This significantly increased pace compared to past decades is another critical reason for revisiting the discipline.
VR is an experience designed for humans by humans. Hence, an important factor we have to consider is the target audience. The digital behavior of people has changed at a fast pace in recent years. By becoming more digital, our inhibition threshold regarding VR technologies is now lower than ever. For instance, advanced interaction metaphors, such as gestures, are already known from smartphones. Furthermore, 3D cinemas and similar stereoscopic experiences increased our robustness against cybersickness.

These behavioral changes potentially have a high impact on a set of VR research areas. In particular, aspects such as locomotion, interaction, and perception are vital candidates to be affected by the change in our (digital) nature. Hence, the thesis places a particular emphasis on these three areas while leaving out other aspects, such as audio.
Scope and Contextual Boundaries

The virtual experience is based on several sensations. The general impression is formed by what we see (vision), touch (somatosensation), hear (audition), smell (olfaction), and taste (gustation). Some of these aspects were affected more by the VR advances mentioned above than others. For this reason, and to maintain focus, the thesis concentrates on vision and somatosensation and does not include research that targets audition, olfaction, and gustation. Of course, this does not mean that these areas are not worth further exploration. Particularly regarding olfaction [Yan+04] and gustation [Nar+11], we can surely expect significant advances soon. In the meantime, we take a closer look at the following research disciplines:
Locomotion. The manuscript provides a high-level overview and classification of VR locomotion techniques, be they stationary or walking-based. Moreover, we introduce and discuss two novel approaches based on virtual body resizing and natural walking: GulliVR and Outstanding.

Interaction. In this manuscript, we contribute the following advances to this very diverse research area:
• Haptics - by introducing a self-transforming controller approach
• Inventory systems - by establishing a VR inventory taxonomy
• Gestures - by designing gestures that an audience can easily understand
• I/O abstraction - by developing a framework for arbitrary, multimodal I/O.

Perception. VR setups impact the way we see the (virtual) world and ourselves. We focus on two examples: First, we take a closer look at the illusion of virtual body ownership, a phenomenon that allows us to perceive our virtual avatar as our own body. We extend this illusion to nonhumanoid representations, allowing users to have novel experiences by embodying various creatures. Second, we dive into the area of visualization and examine how the dichoptic presentation of a VR setup can be "exploited" for preattentive highlighting.

This dissertation consists of two parts: a synopsis and a collection of related publications. The main goal of the synopsis is to establish a common thread through the included papers by grouping and summarizing the conducted research in a high-level manner. For each research topic, the reader obtains a brief motivation, a summarized contribution, and an overview of the results. In particular, the insights exposed in the synopsis are not meant to form a standalone report and should be regarded as pointers to the actual manuscripts. For instance, the synopsis does not provide particular p-values or source code to prevent a loss of focus.
Fig. 1.2: The main part of the synopsis and the respective publications at a glance. Note that intermediate manuscripts, such as workshop contributions or works in progress, are not listed as standalone items to maintain clarity.
Structure of the Synopsis

Overall, the thesis covers multiple research objectives that are thematically organized in three chapters: Locomotion, Interaction, and Perception (cf. Figure 1.2). We start by taking a closer look at one of the most crucial aspects of VR, namely locomotion. After establishing a classification of the diverse movement techniques, we discuss two novel approaches for traveling in virtual environments. Both approaches share the common idea of virtual body resizing and eye distance modifications to allow natural walking over large distances without cybersickness.

In the second part, we examine four selected aspects of multimodal interaction. Similar to our walking-inspired contributions in the case of locomotion, we initially focus on the most realistic way to interact with objects in VR. More precisely, we explore the benefits of physical proxies using the example of a self-transforming controller that mimics different virtual objects through a weight-shifting approach. Such interaction with multiple objects requires a useful way to store and represent these entities in VR. For this purpose, we construct a taxonomy for VR inventories and elaborate on it with three example scenarios. Then, as a sidestep, we leave user-centered interaction research and consider the spectators instead. The question in mind is whether and how it is possible to establish gesture-based interaction that is comprehensible and predictable for an audience. Finally, after being confronted with such a diversity of I/O modalities, we see how a hypergraph-based abstraction model can be used to hide this complexity from users and developers to foster the usability of multimodal I/O setups.

The third part of the synopsis is dedicated to perceptual phenomena that we can achieve thanks to VR. In other words, we focus less on improving VR per se and instead seek possibilities to benefit from the unique features of such setups. We familiarize ourselves with the so-called illusion of virtual body ownership and transfer such feelings of embodiment onto nonhumanoid avatars. In particular, we see how VR allows us to become an animal and to gain novel experiences in the role of such creatures. Furthermore, we delve into the area of preattentive vision and learn how VR technology can be utilized to highlight objects in visualization tasks.

The synopsis is rounded off by a general discussion of the conducted research. We look critically at the overall contribution of this dissertation and express our final thoughts regarding the possible impact of our results on the status quo of virtual experiences.

Locomotion
Virtual environments evoke our sense of adventure and our urge for exploration. Hence, locomotion has always played an important role in VR research. Consequently, we dedicate this first chapter to answering the question of how to get from point A to point B in a virtual environment. More precisely, we focus on VR games as a directly affected application area, as this allows us to initiate a more in-depth discussion. Hence, we replace the generic "user" with the more fitting term "player". Naturally, the presented research is also applicable to other VR domains with no or minor restrictions.

In particular, this chapter outlines three contributions: an overview of the current VR locomotion landscape [KE] and two novel approaches that emphasize natural walking as the most realistic way of movement. Accordingly, we begin by drawing a big picture of locomotion techniques to provide orientation and to establish a foothold on basic concepts, such as presence and cybersickness. Thereafter, we discuss the idea of virtual body resizing that lays the foundations for the two novel techniques GulliVR [Kre+18] and Outstanding [CKK19a; CKK19b].
Before delving into locomotion and VR research in general, we need to revisit certain core mechanics of such setups. In addition to providing stereo images, VR headsets also track the head orientation, which allows us to look around in VR as in the real world. This impression is often described as a feeling of "being there" [Hee92; LD97]. Other popular wordings include the two terms presence and immersion, sometimes used interchangeably. Throughout the dissertation, we use immersion [CCN14] when focusing on the technical quality of VR hardware [BD95; SC02]. In contrast, we use presence to describe how immersive setups affect our perception [Sla03]. Researchers often target an increase in presence when coming up with novel VR approaches, and VR locomotion is a prominent advocate for such presence-enhancing research [SUS95]. The most common methods to measure presence include the Presence Questionnaire (PQ) [WJS05] and the Igroup Presence Questionnaire (IPQ) [Sch03; SRF18]. Also, the Immersive Tendencies Questionnaire (ITQ) [WS98] can be administered before the actual survey to check participants' tendencies to get immersed in an activity or fiction.

The feeling of presence imposes high demands on VR software and hardware. Minor faults or technical issues, such as a slightly offset locomotion or a sudden drop in frame rate, can have severe consequences for the players' well-being and result in cybersickness [SKD97; LaV00]. Other prominent terms often used in this context are simulator sickness [Kol95] and motion sickness [Mon70; HR92; Ohy+07]. In contrast to the manifold reasons for cybersickness, the main cause of simulator sickness is an incorrectly adjusted simulator [Ken+89], i.e., a rather technical problem.

Cybersickness involves symptoms such as nausea, eye strain, and headaches. The main reason behind this negative phenomenon is a mismatch in our vestibulo-ocular system: our vestibular system senses acceleration that ideally matches the visual input.
When these signals do not match, the symptoms mentioned above are likely to occur [RB75]. Three popular explanations [LaV00] exist for such an undesired body reaction: poison theory, postural instability theory, and, most prominently, conflict theory. Hettinger et al. [Het+90] also mentioned vection as a possible reason behind cybersickness. Vection describes the feeling of movement that relies only on our visual system and occurs when we, e.g., sit in a standing train and observe another train that is currently accelerating. Vection is influenced by several factors, including the field of view (FOV), the alignment and proximity of moving objects, and the optical flow rate. For instance, the combination of a large FOV and fast-moving objects that account for a large proportion of the player's view amplifies the perceived vection and smooths the way to cybersickness. Hence, limiting the FOV is one of the possible approaches to reduce cybersickness [FF16; Lin+02].

Regarding VR locomotion, we recommend keeping cybersickness in mind and avoiding cognitive mismatches where possible. However, this guideline should not be pursued at all costs: "sickness-safe", stationary scenes without any locomotion will potentially miss out on a wide range of benefits that VR has to offer. Moreover, as shown by von Mammen et al. [VKE16], games might even benefit from artificially induced cybersickness in some instances.

Overview of the VR Locomotion Landscape

In recent years, research [Bol17] and the game industry [Hab+17] created a plethora of varying locomotion approaches. As is often the case, there is no "right" answer as to which one to pick for an upcoming VR game. Hence, the main purpose of our research article on player locomotion [KE] was to analyze the broad spectrum of possibilities to move in VR and to serve as a starting ground for further research and (game) development.
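Returning briefly to the FOV-limiting countermeasure mentioned above: in practice, it is often realized as a speed-dependent vignette. The following Python sketch is purely illustrative and not taken from any of the cited works; the function name, the 110°/70° bounds, and the 5 m/s reference speed are assumed example values.

```python
def vignette_fov(speed_mps: float,
                 base_fov_deg: float = 110.0,   # unrestricted FOV (assumed)
                 min_fov_deg: float = 70.0,     # strongest restriction (assumed)
                 max_speed_mps: float = 5.0) -> float:
    """Linearly narrow the visible field of view as virtual speed grows,
    reducing peripheral optical flow and thus perceived vection."""
    k = min(max(speed_mps / max_speed_mps, 0.0), 1.0)
    return base_fov_deg - k * (base_fov_deg - min_fov_deg)

# Standing still keeps the full FOV; fast virtual motion narrows it.
print(vignette_fov(0.0))   # 110.0
print(vignette_fov(5.0))   # 70.0
```

The key design choice is that the vignette only kicks in during fast virtual motion, so the immersion-relevant wide FOV is preserved whenever the player stands still or walks physically.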
The summarized approaches range from stationary methods that rely on gamepads to advanced redirected walking techniques that trick our perception into enabling unrestricted natural walking in a limited space. Figure 2.1 outlines the proposed division into stationary and walking-oriented approaches.

Fig. 2.1: An overview of VR locomotion techniques, including their benefits and drawbacks. Green means that the attribute is a known advantage of the technique, yellow stands for limited value, and red marks a disadvantage.

Stationary techniques, such as gamepad or gesture-based controls, do not involve physical movement of the player. Hence, the main reasons for choosing such approaches are the lower hardware requirements, as we do not have to track the absolute player position, and the reduced fatigue, as players can remain seated during a gaming session. On the downside, standing still while moving in VR usually comes with the risk of cybersickness due to the cognitive mismatch. Therefore, previous research [Med+16; Yao+14] suggests short, fast movements with no acceleration as one measure to combat cybersickness in such cases.
In contrast to stationary techniques, most of the approaches inspired by natural walking are intuitive and robust concerning cybersickness. A significant amount of research has confirmed the superior realism of physical walking in VR and its positive effect on perceived presence [SUS95; Uso+99; RL09; WH13; RL06]. Unfortunately, unrestricted natural walking is hardly achievable, as the physical room size imposes an insuperable obstacle. Hence, approaches falling into the walking-inspired category attempt to overcome this limitation in various ways, be it via physical treadmills or redirected walking.

Apart from the outlined classification, our work contains a set of design implications to facilitate the decision-making process on locomotion in research and (game) development. We base our suggestions on the following three questions:
• What is our target audience?
• How important is exploration?
• What else happens during locomotion?
A general response to these questions can be extracted from Figure 2.1. For a more detailed discussion of each aspect, we invite the reader to revisit the full publication, where we provide the corresponding design considerations.
Our research contributes to the field of walking-inspired techniques. We believe that the significant advantages of natural walking, such as the mentioned increase in presence [SUS95; Uso+99; RL09; WH13; RL06] or the improvement of our cognitive map [RVB11], are worth the research effort required to deal with the fundamental problem of physical space limitations.

In a broader sense, our approaches can be classified as multiscale virtual environment navigation [Lam+09; AM16; ZF02; Kop+06]. Common locomotion methods are based on a 1:1 ratio between the physical and the virtual world, i.e., the traveled distance in the physical room roughly corresponds to our virtual footprint. In contrast, we fit the virtual environment entirely into our living room by rescaling the player. In the remainder of this chapter, we briefly outline the core idea of virtual body resizing and how it manifests in our two locomotion approaches. Finally, we share some ideas about the general applicability of our work and its overall impact on the VR locomotion landscape.
Our core idea is to enlarge the virtual body of the player on demand. Such a transformation allows the player to travel vast distances in a room-scale environment using natural walking. We could obtain the same results by shrinking the size of the world instead of enlarging the player. However, we would not recommend that approach in practice for performance reasons. For instance, having a fully resizable environment usually interferes with baked lighting and also favors floating-point precision errors.

Fig. 2.2: The increased eye distance results in a larger vergence angle, altering the size/distance perception of objects and evoking the feeling of being a giant. Physical movement is perfectly aligned with visual feedback, which obviates cybersickness.

At first glance, the idea is similar to a common flying approach, as both variants share the same camera position and velocity. However, flying is known for its severe cybersickness due to the cognitive mismatch of physical and virtual movement speeds. In contrast, enlarging the player comes with an increase of the interpupillary distance, as depicted in Figure 2.2. Although tiny variations of the interpupillary distance have been shown to have no measurable impact on size judgments [Bes96] and can even be applied unnoticed [WGP98], setting the modeled eye separation to a significantly different value compared to the physical eye distance results in so-called false eye separation [CLW14]. The resulting perceived image causes an altered size perception compared to real objects [WHR99], leading to a miniature-world perspective. Renner et al. [Ren+15] and van der Hoort et al. [HGE11] also reported similar findings, confirming that increasing the stereo base (i.e., the modeled eye distance) makes objects appear nearer and smaller. This altered perception allows us to minimize the risk of cybersickness, as players feel like they are walking as giants (no mismatch) and not like they are artificially floating or flying.
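The geometry behind this effect can be made concrete in a few lines of Python. The sketch below is an illustrative calculation I am adding, not code from the publications; the 64 mm interpupillary distance and the 10× scaling factor are assumed example values. It shows that scaling the modeled eye separation by a factor k produces exactly the same vergence angle as shrinking the world by 1/k, which is the miniature-world percept described above.

```python
import math

def vergence_angle_deg(eye_separation_m: float, object_distance_m: float) -> float:
    """Vergence angle (degrees) for an object fixated straight ahead."""
    return math.degrees(2.0 * math.atan(eye_separation_m / (2.0 * object_distance_m)))

ipd = 0.064      # typical physical interpupillary distance, ~64 mm (assumed)
k = 10.0         # example giant-mode scaling factor (assumed)
d = 20.0         # distance to a virtual object in meters

giant = vergence_angle_deg(k * ipd, d)        # enlarged modeled eye separation
miniature = vergence_angle_deg(ipd, d / k)    # equivalent: world shrunk by 1/k
assert math.isclose(giant, miniature)         # same percept, two descriptions
```

Because atan(k·e / 2d) equals atan(e / 2(d/k)), both formulations are mathematically identical, which is why players read the enlarged eye distance as "the world got smaller" rather than "I am moving unnaturally fast".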
The
GulliVR navigation metaphor [Kre+18] utilizes virtual body resizing to transformplayers into giants on demand. As giants, players can travel over large distanceswithin a few steps, and switch to normal size once the destination is reached (cf.
Novel Approaches: GulliVR and Outstanding ig. 2.3: GulliVR: players turn into giants and traverse large distances without cybersicknessdue to the adjusted interpupillary distance.
Figure 2.3). Although the underlying mechanism is straightforward, we have toconsider several degrees of freedom for proper functioning. First and foremost, weneed to decide how fast the transition between the two scales should be. By meansof a pretest, we found a sweet spot at t = 0 . · Scale giant . As an example, ourtestbed scenario utilized × as a scaling factor, resulting in a transition of half asecond. Slower transition times bear the risk of cybersickness, while faster or instanttransitions favor player disorientation.The main reason for us to emphasize natural walking was the expected increasein presence. Hence, we conducted a user study that compared GulliVR with theestablished point and teleport locomotion approach [Boz+16b]. Our results con-firmed our hypothesis regarding presence. Furthermore, players in the
GulliVR group walked significantly more and did not show any signs of cybersickness.

In addition to these fundamental advantages, our work established different possibilities for leaving the giant mode. In particular, we discussed the unguided, vanilla version, a crosshairs-based extension, and an automated pulling system, as depicted in Figure 2.4. The latter option is what we utilized in our testbed game: leaving the giant mode pulled the player discreetly toward the nearest point of interest to avoid unnecessary readjustments and frustration.

To summarize,
GulliVR successfully introduced the core approach of giant-like traveling in VR games. The idea was later picked up by Abtahi et al. [Abt+19] and Cmentowski et al. [CKK19a; CKK19b]. In particular, these works confirm our finding that the increase of the interpupillary distance in giant mode prevents cybersickness, which is an essential requirement for the applicability of such locomotion methods.

Chapter 2 Locomotion

Fig. 2.4:
Two extensions of GulliVR to enable a precise transition from giant mode back to normal mode. Pulling adds a horizontal translation toward the nearby point of interest, whereas aiming displays a crosshair to indicate the destination location.
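The transition timing discussed above can be sketched as follows (Python; the function names and the linear ramp are our illustrative assumptions, not the exact GulliVR implementation): the duration grows linearly with the giant scale, and the avatar scale, and with it the modeled eye distance, is interpolated over that duration.

```python
def transition_duration(scale_giant: float, k: float = 0.05) -> float:
    """Transition time in seconds, proportional to the giant scale.

    With k = 0.05 s per scale unit, a 10x giant yields a half-second
    transition; slower ramps risk cybersickness, faster or instant
    ones disorient the player.
    """
    return k * scale_giant

def current_scale(t: float, scale_giant: float, k: float = 0.05) -> float:
    """Avatar scale at time t during the grow transition.

    A linear ramp from 1.0 to the giant scale, clamped at both ends.
    """
    duration = transition_duration(scale_giant, k)
    u = min(max(t / duration, 0.0), 1.0)
    return 1.0 + (scale_giant - 1.0) * u
```

The same ramp can be run backward to shrink the player when leaving the giant mode.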
In contrast to GulliVR, which relies on a consistent first-person view, the
Outstanding technique [CKK19a; CKK19b] connects the virtual body resizing to a dynamic switching between first-person and third-person perspectives. In the unscaled state, players get the typical first-person view to perform short-range exploration by physical walking, detailed observation of local points of interest, and basic interactions, such as picking up objects. For large-distance traveling, we once again perform the virtual body resizing. This time, however, the avatar is left behind, and players obtain a third-person bird's eye perspective. We display the virtual avatar at the players' feet to symbolize their original first-person position in the world. This view allows the disembodied players to observe the surrounding area from an elevated view and to command their avatar by setting navigation targets using raycast aiming (cf. Figure 2.5).

Each perspective has benefits and drawbacks [STV06; Gor+17]: the third-person mode is excellent for environmental perception, while the first-person view is superior regarding interaction-intensive tasks. The introduced dynamic perspective switching combines the strengths of both views and achieves uncomplicated, overview-enhancing traveling with interactive, local exploration on demand.

Similar to
GulliVR, the transition between the two modes—and even perspectives in this case—required additional iterations and prestudies. In our work, we proposed a fast, dolly-shot-like animation to improve the impression of embodying or disembodying the avatar. We also added a translation backward to achieve a comfortable 45° viewing angle after switching to the third-person mode and included a curved
animation between both states. This curve emphasizes horizontal disembodiment followed by a steeper vertical growth, as shown in Figure 2.5.

Fig. 2.5: Outstanding: players can switch to a bird's eye third-person perspective and control their avatar via raycast aiming. The picture-in-picture shows the optimal transition parameters to convey the feeling of embodying or disembodying the avatar.

To validate our technique, we compared
Outstanding to the point and teleport locomotion approach. The results of our study confirmed a significant increase in spatial orientation while maintaining high levels of presence, competence, and enjoyment. Our experiments showed that players generally liked the idea of the dynamic switching between different perspectives and that they were able to use the technique without significant problems. Additionally, the work summarized the critical insights into a set of comprehensive design guidelines and established technical extensions, such as a catching-up mechanism to skip ahead and close the gap between the player and the avatar without switching perspectives.
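The disembodiment curve described above, horizontal motion first and a steeper vertical climb afterwards, can be sketched with two complementary easing functions (Python; the sine-based easings and parameter names are our illustrative choice, not the exact curve from the publication):

```python
import math

def disembody_position(u, start, horiz_offset, vert_offset):
    """Camera position along the third-person transition, u in [0, 1].

    Horizontal progress uses an ease-out (fast at the beginning),
    vertical progress an ease-in (fast at the end), so the camera first
    pulls back behind the avatar and only then climbs steeply toward
    the elevated bird's eye viewpoint.
    """
    h = math.sin(u * math.pi / 2.0)        # ease-out: most progress early
    v = 1.0 - math.cos(u * math.pi / 2.0)  # ease-in: most progress late
    return tuple(s + ho * h + vo * v
                 for s, ho, vo in zip(start, horiz_offset, vert_offset))
```

Halfway through the animation the horizontal offset is already about 71 % complete while the vertical offset has covered only about 29 %, which produces the desired flat-then-steep shape.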
Apart from a classification of the current locomotion landscape, this chapter introduced two novel locomotion techniques that rely on virtual body resizing. The core idea was the enlargement of players on demand to conquer large distances within a few steps. Our main motivation was to emphasize natural walking as the most intuitive locomotion method, and indeed, according to our results, we succeeded on multiple fronts:

• the adjustment of the interpupillary distance prevented cybersickness
• the view from above increased the spatial orientation
• the approaches had a positive impact on presence
• players walked more, enjoying the advantages of physical movement [RL09].

However, such contributions always require critical analysis. Neither
Outstanding nor GulliVR removes the room-scale limitation. Even if we can travel much larger distances thanks to rescaling, we will eventually end up in front of a physical wall. For instance, in the case of
GulliVR, players have to go back to normal scale, walk away from the wall, and re-enter the giant mode to continue traveling. For this reason, we position our approaches as additional locomotion methods and not as standalone solutions. More precisely, we recommend a combination with stationary approaches, such as teleportation, to facilitate such resetting and foster short-distance exploration.

As player resizing comes with specific strengths and weaknesses, we have to embed such techniques carefully in the underlying scenarios. In our testbed scenarios, players encountered a medieval, fantasy-inspired setting, where being a giant potentially fits the narrative. In contrast, we discourage the application of such methods in games with dominant indoor scenery: narrow, multilevel closed environments diminish the advantages mentioned above, such as the increase of spatial orientation, and further complicate targeting when switching back to a regular scale.

In addition to such scenery-related decisions, we recommend considering the pace of the game: dynamic first-person shooters are less likely to benefit from such exploration-oriented techniques than slow-paced role-playing games or 3D adventures relying on big open worlds. However, note that players could easily get spoiled by seeing too far ahead, which requires additional techniques like the fog of war.

An important lesson that we learned from our evaluations is that players desire additional interactions while traveling as giants. For instance, in the case of
Outstanding, players have to wait for their avatar to reach the destination, and just looking around quickly gets annoying. Digital games keep players engaged by introducing new events regularly. Similarly, we propose to include additional incidents that force players to switch back to regular size or to add specific actions as a giant. The latter option was highly popular in the GulliVR study, and players enjoyed their superhuman abilities while picking up and moving large objects.

To summarize, our research has shown how virtual body resizing can be used to design locomotion techniques with unique advantages, such as increased spatial orientation and fast travel without cybersickness. We suggest that rethinking player movement from scratch, as we did with Outstanding and GulliVR, is more promising than porting locomotion 1:1 from non-VR games. “VR-first” approaches allow us to take full advantage of the benefits that VR setups have to offer, and we believe that our and similar research efforts can pave the way to even more engaging and fascinating player experiences and game mechanics in VR.
Concluding Thoughts on Resized Walking

Interaction
In the previous chapter, we mainly dealt with the issue of getting from point A to point B. However, locomotion is just a basic need in VR. To create an engaging user experience and fully benefit from immersive setups, we need intuitive, user-tailored interaction solutions that amplify our feeling of being in the virtual world.

This chapter covers four subareas of interaction and explores ways to enhance the scope of user actions in VR. Throughout our related research, our primary motivation was to make user interaction in VR as realistic and intuitive as possible. In particular, we achieved these goals by developing physical proxies [Kre+17b], rethinking virtual item representations [Cme+19; CKKew], designing comprehensible gestures [Kre+17a], and providing a meaningful I/O abstraction layer [Dai+13; Kre+16] to bring all these components together into a coherent VR ecosystem.

We start with object manipulation in VR and the possibilities to realize haptic feedback for such interactions. More precisely, the first section focuses on the design of input controllers that can accommodate multiple virtual objects via shapeshifting.
One easy way to break presence in VR is to touch a virtual object without getting proper tactile feedback [SVG15]. Hence, input controllers play an essential role in our virtual experience [Jen+08; YKM16; BM13]. In this section, we consider the design of such controllers using the example of first-person shooters [Zar+14]. We may assume that a well-crafted controller that feels like its virtual counterpart [Bro+15], i.e., a gun, can increase the feeling of presence. For instance, the Vive controller resembles a gun handle with a trigger, while other devices mimic the shape of two-handed guns to provide players with more realistic haptics for games with such weapon types. Note that the actual appeal of such devices is not relevant, as players do not see the controller while being in VR.

Unfortunately, such devices cannot reflect the in-game weapon switching, which is a crucial element of most shooter games. Relying on a one-handed, pistol-like controller for a game where the current weapon is a two-handed rifle is less realistic than a two-handed input device, and vice versa. For this issue, our research proposed a self-transforming controller [Kre+17b], as depicted in Figure 3.1. Our controller adjusts its shape and handling according to its game representation, i.e., the device feels and behaves similarly to a pistol in the first state, and similarly to a rifle in the second state. The transformation is done via a telescopic, motor-driven mechanism.

Fig. 3.1: Our self-transforming controller simulating a two-handed laser rifle (left) and a one-handed blaster (right). The transformation is triggered by a button at the bottom of the handle and works based on a motor and two telescopic tubes. A built-in Vive controller is used for tracking.

Our research contributes to the body of literature related to shapeshifting input devices [Mic+04; Ste+13; Nak+16; NYI13; IOO16; KIH15]. Similar to Zenner et al.
[ZK17], we rely on weight shifts in the device to modify the rotational resistance and the perceived inertia. Our design thinking workshop and subsequent evaluations revealed the following insights:

• controllers should be lighter than real guns (< 0.8 kg)
• controllers should be shorter than real guns (guns: < 25 cm, rifles: < 40 cm)
• subtle weight shifts suffice (in our case, a 1:1.7 ratio of gun to rifle).

Our results confirmed that such an adaptive approach outperforms default VR controllers concerning scales such as appearance, efficiency, authenticity, experienced realism, and flow. Most importantly, 96 % of the participants reported liking the overall concept of a self-transforming controller.

Based on this positive feedback, we suggest applying the same design approach to other weapons or virtual objects. For instance, the inclusion of a second rack that transports an equal weight toward the player allows simulating shoulder-fired missile weapons, which allows us to cover most of the weapon classes for first-person shooters. Such multifunctional physical proxies are also beneficial for other game genres, such as role-playing games or 3D adventures. In this regard, a retractable handle would enhance the design space with sword-shaped virtual objects and simulate a switch between, e.g., a short knife and a two-handed sword.
Transformable controllers that represent multiple virtual objects are a significant advancement toward more realistic interaction in VR applications and games in particular. Being equipped with multiple virtual objects requires efficient ways to manage such items.

Fig. 3.2: Our taxonomy of inventory systems in virtual environments. This figure is read from top to bottom, starting with the requirements that we should take into account before and while designing inventories. The considerations are used to select design choices for each building block at the bottom.

Storage interfaces, best known as inventories [Weg+17], are among the most commonly used features in nearly every game genre. Although carrying multiple items appears like a natural addition to interaction-oriented gameplay, most VR developers refrain from using inventories at all. For this reason, the goal of our research [Cme+19; CKKew] was to understand the peculiarities of VR inventory design. We applied the following scientific methods to achieve this objective:

• in-depth developer interviews to determine real-world challenges and pain points related to inventory design in VR
• literature reviews to distill best practices on VR menus in general
• games analysis via a grounded theory approach [CB07; GSS68] to identify the essential building blocks and characteristics of VR inventories.
Inventory Systems: Organizing Objects in VR

Fig. 3.3: Three example realizations based on our taxonomy. Flat grid: a 2D overlay and virtual raycasts are used to achieve fast and straightforward item management. Virtual drawers: items are inserted using physical actions and scaled to equal sizes. Magnetic surface: free item placement allows for precise positioning; items stick to the surface until removed.
The resulting taxonomy is depicted in Figure 3.2. Also, our publication exposes a set of design implications and demonstrates the practical use of our taxonomy in action. More precisely, the work introduces and evaluates three manifestations of VR inventories: a flat grid, virtual drawers, and a magnetic surface (cf. Figure 3.3).

The outlined work contributes to the area of VR interaction by decomposing the inventory design process into requirements and building blocks. The resulting taxonomy can be used to facilitate and improve the decision-making of researchers and game developers. There is a need for further evaluations of particular inventories. We assume that such additional studies will allow our community to create a big picture of the interplay between the individual building blocks. This understanding, in turn, will further improve the status quo of user interaction in virtual environments.
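As an illustration of the first manifestation, a flat-grid inventory reduces to a fixed set of slots addressed by (row, column), which is exactly what a virtual raycast against a 2D overlay would select. The following minimal sketch (Python; the class and method names are hypothetical, not from our publication) captures that building block:

```python
class FlatGridInventory:
    """Minimal flat-grid inventory: a fixed 2D overlay of slots that a
    raycast would address by (row, col)."""

    def __init__(self, rows: int, cols: int):
        self.rows, self.cols = rows, cols
        self._slots = {}  # (row, col) -> stored item

    def store(self, row: int, col: int, item) -> None:
        """Place an item into an empty slot inside the grid bounds."""
        if not (0 <= row < self.rows and 0 <= col < self.cols):
            raise IndexError("slot outside the grid")
        if (row, col) in self._slots:
            raise ValueError("slot already occupied")
        self._slots[(row, col)] = item

    def take(self, row: int, col: int):
        """Remove and return the item, or None for an empty slot."""
        return self._slots.pop((row, col), None)
```

The other two manifestations differ mainly in how a slot is addressed: virtual drawers replace the raycast with a physical insertion gesture, and the magnetic surface replaces the discrete grid with continuous positions.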
So far, we have seen how to realize direct object manipulation with transformable physical proxies and how to manage virtual objects in virtual inventories. As is usually the case in human-computer interaction, we explicitly focused on the user. This section breaks with this tradition and focuses on the observers, i.e., the audience, instead. More precisely, the outlined work [Kre+17a] explores whether and how gesture-based interaction can be designed to be understandable by bystanders who watch from the sidelines.

Why is it important to understand our interaction? Firstly, acting or playing in VR is often an experience that we cannot easily share due to hardware requirements. In this case, our audience can benefit from clear, unambiguous gestures, which make it easy to follow and understand our actions [Hes+12; GJM11]. Secondly, audience-friendly and anticipatable gestures can be helpful during presentations [Hai+16; CJC16; Cuc+12; Tan+10], as they facilitate the process of message transportation [HEP07; Xia05; Bat+13].

Fig. 3.4: In the Wizard of Oz experiment, participants were asked to perform body movements and complete a set of tasks, such as moving (the camera) forward. The wizard monitored the participants via the Kinect camera and triggered the corresponding behavior within the target application.

We relied on a Wizard of Oz experiment [Alt+04; DJA93; MGM93; SC93] with representatives from a typical audience to design multiple sets of anticipatable gestures (cf. Figure 3.4). By filtering and clustering the results, we established the following three distinct sets: bimanual gestures, one-armed gestures, and full-body movements, as shown in Figure 3.5.

An online evaluation revealed that two of the three gesture sets were indeed anticipatable, which supports our suggested approach of gesture crowdsourcing and emphasizes the benefits of gestures for comprehensible 3D presentations. In particular, we determined that gestures relying on weight shifting are very robust against misinterpretation. Ambiguous configurations, i.e., where different arms lead to different interactions despite executing the same gesture, are rather hard to grasp and should be avoided.

For future research and applications, we recommend the following guidelines for anticipatable gestures:

• keep gestures as distinct as possible
• use full-body movements (e.g., weight shift) rather than minimalistic gestures
• provide analog behavior for both arms.
Fig. 3.5:
Gestures for two example tasks. The green set mostly contains bimanual gestures. The blue set adds weight transfer. The red set consists of minimalist, one-armed gestures, and is the only set that was not predictable by the audience in our study.
3D Gestures: Interaction for the Audience

Such gestures depend on the target application and the environment (e.g., the given VR setup). In our case, we considered the example of a digital planetarium. Hence, the gestures might need an adaptation for a different use case. Furthermore, we suggest examining the learning curve of an audience in a future study, i.e., the aspect of how easily a gesture can be learned might play an even more important role than the initial anticipation. We also encourage similar evaluations for VR games, especially in combination with live streaming video platforms where player actions in VR need to be understood by thousands of viewers.

Sophisticated gestures and (transformable) controllers are only a few examples of interaction modalities that are possible in VR. Speech input, mobile devices, or gaze-based controls—we have a plethora of contemporary I/O modalities and devices at our disposal. This freedom comes at a cost, as the engineering workload involved in making applications fully adaptable in this sense is very high. Hence, we need a more efficient way to establish connections between applications and interaction devices rather than integrating each device individually. A common approach to provide such interconnectivity is the introduction of abstraction layers [OBL07; JKN02; Sch+11; CW12; VGH12; KRR10] that decouple application logic from I/O handling. This is especially relevant for VR [HGR03; Tay+01; Bau+01], where novel devices are introduced at a high rate.

This section provides a pointer to
MorphableUI [Dai+13; Kre+16]—a multimodal system that transports interaction data between devices and applications. Our approach is based on a taxonomization [Kin+12; RLL11; DH06; CMR90; MCR90] of event types, application requirements, and device capabilities. In short, each application relies on our API to state its requirements, and each device exposes its capabilities that describe the type of generated or processed interaction events (cf. Figure 3.6).
Fig. 3.6:
The introduced operators allow substituting complex application requirements by combining multiple devices. For instance, rotating the dataset can be achieved by a combination of the d-pad of a gamepad and the stick rotation of a joystick. Users choose between such wiring possibilities via a mobile configuration frontend.

Fig. 3.7: An excerpt from the underlying hypergraph model. Event types are captured as vertices, whereas hyperedges represent the operators, i.e., split, merge, and cast. An iterative algorithm starts at device vertices and traverses the hypergraph until it reaches the requirement vertex. The red lines highlight an example wiring between a 3D rotation requirement and two merged devices.
A token-based hypergraph algorithm is applied to generate iterative solutions that fulfill the application requirements given the currently available devices. As depicted in Figure 3.7, our algorithm allows us to merge, split, and cast device capabilities, e.g., it is possible to combine a 2D swiping input on our phone with two arrow keys on a keyboard to trigger a 3D motion event inside the application. During runtime, our distributed implementation handles the event propagation and transformation via local and remote networks.

Our method removes the complexity of I/O handling from the application and allows the integration of novel devices without any modifications to the application logic. Also,
MorphableUI introduces a visual configuration tool (cf. Figure 3.6) that allows users to select and adjust their preferred interaction methods. We can reconfigure these user-tailored interfaces [DF01; GW04; GWW07] during runtime, which emphasizes personalization and, in combination with device exchangeability, brings an additional benefit to rapid prototyping scenarios.

Although our work includes validation through a developer survey, we suggest conducting a detailed study that explores the benefits and drawbacks of such user-tailored interfaces. While some of us enjoy tweaking and personalizing the controls of an application, other users will find themselves overburdened by this additional complexity. Hence, as a next step, we strongly recommend an automated interface generation routine that learns from user behavior and provides a starting point for fine-grained configurations.
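The core of the wiring search can be illustrated with a toy fixed-point computation (Python). The event-type names and operator tables below are invented for illustration and are far simpler than the actual token-based hypergraph traversal:

```python
from itertools import combinations

# Toy operator tables (illustrative, not MorphableUI's real taxonomy):
# casts convert one event type into another, merges fuse two types.
CASTS = {"swipe_2d": "axis_2d", "dpad_2d": "axis_2d"}
MERGES = {frozenset({"axis_2d", "key_axis_1d"}): "motion_3d"}

def can_wire(device_caps, requirement):
    """Fixed-point search: grow the set of reachable event types by
    applying cast and merge operators until the requirement is reached
    or no new types can be derived."""
    reachable = set(device_caps)
    changed = True
    while changed:
        changed = False
        for cap in list(reachable):
            target = CASTS.get(cap)
            if target and target not in reachable:
                reachable.add(target)
                changed = True
        for pair in combinations(sorted(reachable), 2):
            target = MERGES.get(frozenset(pair))
            if target and target not in reachable:
                reachable.add(target)
                changed = True
    return requirement in reachable
```

With these tables, a phone's 2D swipe plus a keyboard's arrow-key axis can be wired to a 3D motion requirement, mirroring the example above, while the keyboard alone cannot.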
MorphableUI: I/O Handling in VR Applications

Applicability of Our Achievements Outside VR

This chapter was an excursus into the world of VR interaction techniques. We explored several ways to enrich our virtual stay with meaningful object manipulations and natural user interfaces. In particular, we have discussed physical proxies, virtual inventory systems, anticipatable gestures, and I/O abstraction layers. Similarly to the previous locomotion chapter, we dedicate this discussion section to a critical analysis of the outlined contributions.

More precisely, this section reflects upon the applicability of our work beyond virtual environments. In contrast to locomotion, most of our interaction approaches are transferable to non-VR applications. Even more, certain ideas, such as audience gestures, were inspired by non-VR challenges, which further underpins the generalisability of such solutions. Hence, to make use of the full potential of our publications, we summarize their value outside VR in the following paragraphs:
Self-transforming controller.
The straightforward applications are non-VR first-person shooters. From a technical point of view, there is nothing that prevents us from using the controller in a conventional digital game. However, we predict two potential challenges that we need to tackle. Firstly, our prototype purposefully ignores the overall appeal because players do not see the real controller while being in VR. Hence, the—now visible—difference in appearance between the proxy and its virtual counterpart might lead to a reduced increase in player experience. Secondly, the same circumstance could require a more extensive shift in weight that aligns with the virtual transformation.
Inventory Systems.
Despite being an established game feature, we are not aware of any research that targets inventories explicitly. Hence, we suggest reconsidering our taxonomy for non-VR games. Even if we need to adjust the individual building blocks due to altered requirements, our manuscript is still a valuable starting point for further investigations. Ultimately, we suggest the creation of a general taxonomy that covers all modalities, including mixed and augmented realities.
Comprehensible Gestures.
In a subsequent collaboration, we utilized our gesture sets in a digital planetarium. The presenter interactively navigates through the universe while giving insights into astrophysics and answering live questions. Our gesture sets greatly facilitate such communication. Hence, our crowdsourced gestures have a high potential for many 3D content presentation contexts outside VR that emphasize clear and anticipatable interaction on the part of the moderator.
MorphableUI.
Many I/O abstraction layers, such as VRPN [Tay+01], were created with VR in mind due to the rapidly changing landscape of available devices. In contrast, our hypergraph approach allows for applicability beyond VR-specific setups. The transformation of interaction events facilitates the usage of non-VR devices, such as mobile phones, in VR scenarios, and vice versa. Furthermore, the provided device exchangeability can be used not only for rapid prototyping and user-tailored UIs but also for interactions in changing environments that require dynamic, context-dependent interfaces. On the other hand, MorphableUI introduces additional overhead in complexity, both for users and developers. Hence, we recommend carefully weighing up risks and opportunities before integration, because sophisticated desktop software (e.g., Photoshop) with hundreds of different tasks would most likely not benefit from dynamic UI generation.

To conclude, all of the outlined contributions have significant potential outside VR. Indeed, the conducted research needs to be adjusted (e.g., controller) or extended (e.g., inventories). Nevertheless, our essential observation is that VR-motivated research can remain relevant in human-computer interaction, even if the hype surrounding VR does not prove justified.
Perception

In contrast to locomotion and interaction, this chapter deals with a more subtle, yet not less important aspect of VR, namely perception. While the outlined research of the previous chapters was dedicated to improving the status quo of VR, our work on (visual) perception focuses on the question of how users can benefit from the strong points of VR setups. In other words, the following sections explore the advantages resulting from the altered perception in immersive setups. In particular, we take a closer look at two research areas related to perception: (nonhumanoid) body ownership [KCK18; KCK19; Kre+19b] and visualization [KMK19; KK19; Kre+19a].
Most VR applications represent our alter ego by using a virtual avatar. Given that VR setups offer a high degree of immersion, the bond to this virtual body is usually much stronger than in common desktop scenarios. Put more simply, VR allows us to perceive the virtual representation as our own body. This perceptional phenomenon is known as the illusion of virtual body ownership (IVBO) [Sla+10], and previous research agrees that VR is an effective medium to induce such experiences [Sla+09; Wal+18].

However, the investigated scenarios were limited mostly to human avatars. Our research [KCK18; KCK19; Kre+19b], in contrast, focused on nonhumanoid representations, as we assume that this kind of unusual experience bears significant potential for several application areas, such as entertainment or education. For instance, introducing exotic avatars beyond stereotypic knights and wizards is a viable option to create refreshing and engaging player experiences in VR games. In an educational documentary, the usage of animal avatars could help us to understand the behavior of a particular creature better. Hence, the research on nonhumanoid IVBO outlined in this section is driven by the following motivational question: Is IVBO applicable to nonhumanoid avatars, and, if so, what potential does that phenomenon have for VR applications?
To facilitate the comprehension of our nonhumanoid research, we start with a brief introduction of IVBO [LLL15]. This phenomenon originates in the effect of body ownership and the experiments on the so-called rubber hand illusion [BC98]: the participant's arm is hidden and replaced by an artificial rubber limb, and stroking both the real and the artificial arm creates the illusion of actually owning that artificial limb. After further studies [TH05], researchers proposed a number of models [Tsa10; Ehr07; PE08; Len+07] to explain the interplay between external stimuli and our internal body perception. In particular, prior work concludes that, apart from visuotactile and sensorimotor cues [Sla+10; San+10], the IVBO effect is mainly impacted by visuoproprioceptive cues (perspective, body continuity, posture and alignment, appearance, and realism) [Sla+09; Sla+10; PSS12; MS13].

The IVBO effect in virtual environments [Sla+08; BGS13] was mainly explored with anthropomorphic characters and realistic representations [LLL15; LJ16; Jo+17]. For instance, regarding avatar customization in games, Waltemate et al. [Wal+18] showed that customizable representations lead to significantly higher IVBO effects. A strong IVBO can produce various changes in (player) behavior [Jun+18; MVJ17], resembling the Proteus Effect by Yee et al. [YB07]. For instance, studies revealed a significant reduction in racial bias when players embody a black character [Pec+13]. Other examples include childish emotions arising from embodying child bodies [BGS13] or feeling more stable when having a robotic avatar [Lug+16]. Hence, prior work demonstrates that IVBO can be applied to evoke specific feelings and attributes [Kor+16]. Similarly, we assume that a strong bond to an animal caused by IVBO can also increase our involvement with environmental issues [Ahn+16; Ber07] and our empathy for animals [TS05].

Researchers have also expressed interest in studying IVBO beyond human morphology. For instance, Riva et al.
[RWM14] posed the following question: But what if, instead of simply extending our morphology, a person could become something else—a bat perhaps or an animal so far removed from the human that it does not even have the same kind of skeleton—an invertebrate, like a lobster?

If we consider exotic body compositions, as in the case of a lobster that has few properties in common with our human body, the idea of sensory substitution [BK03] might play an important role. Related to VR games, we could also consider such substitution mechanisms as playful interactions: e.g., we could replace the echolocation feature of a bat by tactile feedback in a VR game. Given the extreme diversity of real and fictional creatures, it is difficult or even impossible to research IVBO for virtual animals as a whole. Instead, previous research tackled isolated modifications of body parts. For instance, Kilteni et al. [Kil+12] were able to stretch the virtual arm up to four times its original length without losing IVBO. Normand et al. [Nor+11] used IVBO to induce the feeling of owning a more massive belly than in reality. As a first step toward generalization, Blom et al. [BAS14] concluded that strong spatial coincidence of real and virtual body parts is not mandatory to produce IVBO.

Individual animals, such as scorpions or rhinos in one of our studies, have additional body parts that players might want to control. In this respect, prior work [Ehr09; GPE11] confirmed that having an additional arm preserves IVBO and induces a double-touch feeling. Steptoe et al. [SSS13] reported the effects of IVBO upon attaching a virtual tail-like body extension to the user's virtual character. These findings are relevant for a plethora of real and fictive nonhumanoids, such as dragons. The authors also discovered higher degrees of IVBO when we synchronize the tail movement with the real body.

Naturally, we need a way to measure and compare the IVBO effect in order to investigate whether and how this phenomenon influences our experience in VR. In this regard, the recent work by Roth et al. [Rot+17] introduced the alpha IVBO questionnaire based on a mirror scenario. The authors suggest acceptance, control, and change as the three factors that determine IVBO. In our initial experiments, we administered the proposed questionnaire as we were curious to see how it performs for animal avatars. Subsequently, we relied on this questionnaire in all our IVBO experiments to maintain comparability throughout our study results.
The first part of our research [KCK18; KCK19] investigated possible control mechanisms for nonhumanoid avatars and the related levels of IVBO. In general, body ownership requires as much sensory feedback as possible to induce proper levels of IVBO. However, providing such cues is challenging for nonhuman characters because there is no straightforward mapping of controls—animals come in various shapes and postures. For instance, bats share a human posture and skeleton but have scaled arms or legs, i.e., they differ in terms of proportions. Tigers and dogs have an almost human skeleton, including the same number of limbs, but they walk on all fours. Other species, such as a spider, show a completely different skeleton and differ in the limb count. Our publication suggests the following control approaches for animal avatars to cover these different degrees of anthropomorphism:
First-person full-body tracking.
The posture of the user is mapped 1:1 to the whole virtual body (cf. Figure 4.1). In this mode, being a tiger implies that we have to crouch on the floor. From a technical perspective, this approach usually requires an additional tracking of our hips and ankles, which we can easily achieve with mainstream devices, such as Vive trackers [Cor19].

Fig. 4.1: Three virtual animals, their controls in full-body tracking mode, and the human avatar that was used as the reference for our IVBO comparisons.

Fig. 4.2: The mirror scenario used to assess IVBO for nonhumanoid avatars in first-person (left) and third-person (right) modes.
First-person half-body tracking.
For particular creatures, a full-body mapping might be too exhausting. Therefore, we designed an alternative that allows us to remain in an upright position while our lower body is mapped to all of the animal's limbs. In the case of a tiger, each of our legs corresponds to two of the animal's paws. Hence, we preserve the sensory feedback and keep the physical effort minimal compared to full-body tracking.
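Assuming a simple position-based retargeting (the ground projection and body-length offset below are illustrative assumptions, not the exact mapping from our studies), the half-body idea for a quadruped can be sketched as one tracked ankle driving a front and a hind paw on the same body side:

```python
# Sketch: half-body mapping for a quadruped avatar. Each tracked human ankle
# drives two virtual paws (one front, one hind) on the same body side, so the
# player stays upright while still receiving motion-synchronous feedback.
# Coordinates are (x, y, z) in avatar-local space; offsets are illustrative.

def map_leg_to_paws(ankle_pos, body_length=1.2):
    """Project one tracked ankle onto a front and a hind paw target."""
    x, y, z = ankle_pos
    y = 0.0  # paws stay on the ground plane
    front = (x, y, z + body_length / 2)   # front paw leads the ankle
    hind  = (x, y, z - body_length / 2)   # hind paw trails it
    return front, hind

# One step of the player's left foot moves both left paws in sync.
front, hind = map_leg_to_paws((-0.15, 0.08, 0.3))
print(front, hind)
```

Because both paws move synchronously with the real leg, the visuomotor correlation that IVBO depends on is retained at a fraction of the physical effort.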
Third-person approaches.
In some applications, we control our avatars from a third-person perspective, i.e., we look over the shoulder of our virtual representation. However, this perspective is challenging when we turn around in VR. There are two ways to handle such rotation. We can either rotate ourselves, i.e., the camera, around the avatar, or vice versa. Camera rotation results in a virtual translation that does not correspond to our physical movement, which, as we remember, is a potential cause of cybersickness. Rotating the avatar can be done in various ways, e.g., by simply sliding the animal sideways around us or by using an agent-like behavior that tries to reposition the animal via natural avatar movement.

We relied on a mirror scenario (cf. Figure 4.2) to measure the IVBO effects of our control approaches for a tiger, a bat, and a spider. We chose the animals such that they differ from humanoids in IVBO-critical domains, i.e., shape (bat), skeleton (spider), and posture (tiger, spider). With this, we gathered the following insights:

• IVBO works for animals and even outperforms humanoid avatars in certain cases (e.g., bat)
• similarly to human avatars [Gal+15], first-person modes for animals outperform third-person approaches regarding IVBO
• half-body approaches are a compromise between IVBO and exhaustion: they reduce fatigue for non-upright animals without a noticeable sacrifice of IVBO
• users enjoy the superhuman abilities (e.g., flying) that come along with a certain animal or an additional body part.
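The "slide the animal sideways" strategy for third-person rotation can be sketched as keeping the avatar on a circle of fixed radius around the player and easing its angular position toward the current view direction each frame; the radius and easing factor are illustrative values, not those from our games:

```python
import math

# Sketch: third-person avatar rotation without rotating the camera. The avatar
# stays at a fixed radius around the player and slides sideways on that circle
# until it is in front of the current view direction again. The radius and
# easing factor are illustrative values, not the ones used in our studies.

RADIUS = 2.0   # distance between player and avatar (meters)
EASE = 0.15    # fraction of the remaining angle covered per frame

def slide_avatar(avatar_angle: float, player_yaw: float) -> float:
    """Ease the avatar's angular position toward the player's view direction."""
    # shortest signed angular difference, wrapped into (-pi, pi]
    diff = math.atan2(math.sin(player_yaw - avatar_angle),
                      math.cos(player_yaw - avatar_angle))
    return avatar_angle + EASE * diff

def avatar_position(player_pos, avatar_angle):
    """Place the avatar on the circle around the player."""
    px, pz = player_pos
    return (px + RADIUS * math.sin(avatar_angle),
            pz + RADIUS * math.cos(avatar_angle))

# The player turns 90 degrees to the right; over a second of frames, the
# avatar glides back into view instead of teleporting.
angle = 0.0
for _ in range(60):
    angle = slide_avatar(angle, math.pi / 2)
print(round(angle, 3), avatar_position((0.0, 0.0), angle))
```

Since only the avatar translates, the camera never performs a virtual movement that contradicts the player's vestibular sense, which is the cybersickness concern raised above.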
4.1.3 Beastly Escape: Nonhumanoid Player Experience
It is one thing to be aware of nonhumanoid IVBO in virtual environments, but it is another matter to apply this phenomenon in favor of the user—or, in our example, the player. Unfortunately, there are very few studies on creature embodiment in VR, which makes it difficult for game designers to predict whether and how players will perceive animal avatars. As only a few games have touched upon this topic, best practices and design guidelines for such avatars are also lacking.

We contribute to this topic by an in-depth exploration of the design space, the benefits, and the limitations of animal avatars in VR games. Prior work indicates that such research should not be overgeneralized, because animals vary significantly among themselves: they differ in posture and (loco)motion, and often have entirely different skeletons. Hence, in our work [Kre+19b], we decided to maintain a clear focus on a few specific representatives to gather sufficient knowledge regarding how we can embed such avatars in a gaming context.

In particular, our publication presented the game creation pipeline for escape room games involving three types of animals: a rhino, a bird, and a scorpion (cf. Figure 4.3). The escape room genre was chosen to allow locomotion via natural walking, which removes the need for additional navigation techniques, such as teleportation. As depicted in Figure 4.4, each game focuses on a different control approach and equips players with a "superhuman" skill that is typical for the respective animal. For instance, in the role of a bird, players have to use their virtual wings to fly and to create gusts of wind for object movement. Being a rhino allows players to use the horn for tricky object interactions, such as lifting and removing a lock through the cage bars. A scorpion offers even more unique interactions: in our game, players can use the tail and claws to cut their way through a labyrinth and defeat an end boss.
Fig. 4.3: Our beastly escape rooms revealed that players enjoy the control over additional body parts that allow novel interactions and enable superhuman abilities.

Fig. 4.4: The control approaches utilized in our example games. Rhino (full-body mapping): players have to stay on all fours; head movement controls the horn. Scorpion (half-body mapping): players remain in an upright posture and use the controllers to open and close the claws and to initiate a tail strike. Bird (full-body mapping): players use their virtual wings to fly and create gusts of wind.
Our publication provides a detailed discussion of the underlying decision-making process and reports an evaluation of the resulting games concerning IVBO and the overall player experience. We summarize the key messages as follows:

• IVBO correlates with player experience and is an important factor in nonhumanoid VR avatar design
• games should emphasize the animal's specific characteristics and abilities
• players have no trouble controlling additional and exotic body parts
• only directly controllable body parts should be visualized
• controls (e.g., full-body vs. half-body) should be designed based on the game-related animal abilities and the target audience.

Our experiments demonstrated that, by a smart choice of avatars, VR games could allow us to collect impressions and experiences that would not be possible or would be far less engaging in a less immersive setup. To put it another way, nonhumanoid avatars profit substantially from current VR technology. One reason is the correlation between IVBO, presence, and game enjoyment. Hence, as follow-up research, we suggest disentangling these relations in detail. We assume that further exploration will encourage researchers and practitioners to consider IVBO as a helpful tool that allows the creation of novel and engaging virtual experiences. Furthermore, we propose to study nonhumanoid IVBO beyond games. In particular, this phenomenon might have an impact on our empathy for animals and nature in general, or we could use it as an in virtuo exposure method [CHW97; Hof+03; Gar+02; Bou+06; Hof98; BWB14] to combat specific animal-related fears, such as arachnophobia.
4.2 Preattentive Highlighting in VR Visualizations
Animal avatars were just one example of how we can benefit from perception phenomena in VR. In this section, we continue the exploration of such phenomena and take a closer look at the area of visualization—an application domain that is well aware of the advantages that VR has to offer [Kra+06; SBN08]. Our primary contribution is Deadeye [KK19]—a preattentive highlighting technique for visualizations, i.e., an approach that allows us to notice an object of interest within a split second. The underlying idea is easy to explain: we highlight an object of interest by rendering it for one eye only. Technically, this is easy to achieve with an HMD, which makes this approach an ideal candidate for VR visualizations. The following sections provide a brief insight into the related research in the following manner: Firstly, we assess the status quo of visualization literacy [KMK19]. Secondly, we lay a foundation by establishing Deadeye for 2D visualizations [KK19]. Thirdly and finally, we transfer our approach to virtual environments and integrate Deadeye into real-world VR visualizations [Kre+19a].

In the course of this synopsis, we often regarded VR technology as something mainstream, i.e., as something that has an impact on our daily life. One particular reason was the overarching application area, i.e., most of our contributions can be used for entertainment purposes. In contrast, visualization—and visualization creation in particular—is usually considered a "serious" application area limited to experts.

Nevertheless, the demand for people capable of creating meaningful and engaging visualizations rapidly outgrows the supply. Accordingly, the skills to understand and generate visual representations are more crucial than ever—skills that we often subsume under the general term visualization literacy [LKK16]. Hence, before tackling the particular issue of visual highlighting, this section explores how ready the mainstream is for visualization and how we can improve this status quo through education.

Our particular contribution [KMK19] is an explorative study on the visualization abilities of novices and a course design that encourages the creation of truly engaging visualizations. The book Dear Data [LP16] motivated our course. The visualizations in that book were created by combining visualization knowledge and creativity, and we wanted to know if students could be motivated to produce similar results in terms of comprehension [LRC12] and engagement [Bat+10; Bor+13]. Therefore, we emphasized design thinking [PN18; CKJ13; Gol94; CDC96; Bro10; RS12] and hands-on exploration of the visualization space [RHR16] to prevent students from any kind of tunnel vision that we often attribute to visualization novices.

Fig. 4.5: Example visualizations on the topic "water". Novices experimented with different approaches: day-by-day visualizations vs. aggregations, digital vs. analog, amount of tracked attributes, visual clutter, and differing topic interpretations.

During the course, our participants had to track certain data each week and create a meaningful visualization at the end of the week (cf. Figure 4.5). Interviews and online surveys accompanied our course in order to collect insights into the thinking process and pain points of visualization novices. Apart from the extracted creation process, as shown in Figure 4.6, we outline the following key observations:

• design thinking motivates novices to experiment with a broader range of visualization methods
• novices often skip the data analysis step and go straight to visualization
• novices tend to ignore the aspect of memorability [Bor+13; Bat+10]
• novices perceive collaborative visualization as more complex and less fun

To summarize, the enrichment of traditional teaching with basic design thinking principles (e.g., divergent thinking, brainstorming) encourages novices to explore and learn a plethora of visualization techniques without falling into a tunnel-vision pattern. We suggest that such novice-oriented approaches can increase the overall visual literacy of the broad masses, which, in turn, is also beneficial for VR visualizations—be it on the part of consumers or producers.
Fig. 4.6: The nonlinear, iterative visualization creation approaches of our students.
4.2.2 Highlighting and Preattentive Visual Features
As we have seen in the example of Dear Data, visualization is an indispensable part of modern communication. Designing comprehensive visualizations requires a deep understanding of how our visual perception works [NS71; IK01; Yar67]. In other words, making efficient use of particular visual characteristics helps us to create visualizations that excel in their usability and performance.

The following sections focus on one aspect of visual perception, namely highlighting, and its interplay with VR technology. Highlighting allows us to draw and guide [War12; Hal+16; BI13] the attention of users to a particular object of interest. One well-known example is the search function of a web browser: it uses color to highlight the occurrences of a query, which allows us to locate the results instantly. A more advanced example is a medical visualization that utilizes flickering to highlight suspicious cells or tissue and helps doctors with exploring the data. Cues such as color, flickering, shape, size, and motion are examples of so-called preattentive visual features [HE12]. Our visual system can detect such features at a glance, i.e., before our eyes initiate a saccadic movement. Consider the example in Figure 4.7: looking at such an image for a split second would suffice to tell whether or not there was a red circle among blue ones. Since a saccade usually needs about 200-250 ms [HE12] to initiate, researchers utilize that threshold to determine if a cue is preattentive.

One important property of preattentive cues is that they perform equally well with an increasing number of distractors. Those features are processed in parallel by our visual system and are not searched serially. That property is crucial when we revise our example with a full-page text search or the exploration of a huge medical dataset. Hence, advances in the exploration of preattentive features can provide substantial benefits [Wal+14; Suh+02; Col+06; Alp+11; GCC17] to visualization researchers and practitioners.

Fig. 4.7: Left image: the target object is a red circle among blue distractors and can be recognized preattentively. Right image: the target object is also an outlier – either a blue square or a red circle (conjunction search). We have to search each object in a serial fashion to find the target, i.e., no preattentive processing is possible.
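This flat-slope criterion can be made concrete: fit response time against set size and treat a near-zero slope per additional distractor as evidence of parallel processing, whereas serial search grows by tens of milliseconds per item. The sample data and the 10 ms/item threshold below are fabricated for illustration only:

```python
# Sketch: classifying a visual cue as preattentive via the search-slope
# criterion. Response times that stay flat with growing set size indicate
# parallel processing; serial (conjunction) search grows with every added
# distractor. The sample data and 10 ms/item threshold are illustrative only.

def search_slope(set_sizes, response_times_ms):
    """Least-squares slope of response time over set size (ms per item)."""
    n = len(set_sizes)
    mean_x = sum(set_sizes) / n
    mean_y = sum(response_times_ms) / n
    cov = sum((x - mean_x) * (y - mean_y)
              for x, y in zip(set_sizes, response_times_ms))
    var = sum((x - mean_x) ** 2 for x in set_sizes)
    return cov / var

def looks_preattentive(set_sizes, response_times_ms, threshold=10.0):
    """Flat slope (below the threshold) suggests parallel processing."""
    return abs(search_slope(set_sizes, response_times_ms)) < threshold

sizes = [5, 10, 20, 40]
feature_search = [420, 425, 418, 430]   # flat: target pops out
conjunction = [450, 560, 780, 1250]     # ~20 ms per item: serial search

print(looks_preattentive(sizes, feature_search))  # True
print(looks_preattentive(sizes, conjunction))     # False
```

The same analysis underlies the evaluations discussed in the next section, where per-trial accuracy under a fixed display duration replaces response time.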
4.2.3 Deadeye: Dichoptic Presentation for Highlighting

The core idea of our work on Deadeye [KK19] is to highlight an object by hiding it for one eye. We refer to such a principle, where each eye is exposed to a different stimulus, as dichoptic presentation. In general, that difference in stimuli leads to binocular rivalry [LLS96; Bla89; Fri12; AB05; Paf+11], i.e., our vision system enters a context-switching mode that allows us to perceive both monocular images alternately instead of experiencing a superimposition. Whether or not we can perceive binocular rivalry in a preattentive manner has been discussed in several prior works [WF88; DL74; TG67; AH98; FM09]. The prevailing opinion is that this phenomenon is usually too weak and overridden by more pronounced features [Zou+17]. Consequently, dichoptic presentation has only rarely been employed in visualization or for highlighting purposes in general [ZCZ12; Zha14]. On the other hand, research by Paffen et al. [PHV12] and especially the work by Zhaoping [Zha08] has provided further evidence that we should reconsider binocular rivalry as a preattentive cue.

Our proposed approach has a unique advantage over existing highlighting methods: Deadeye does not modify any visual properties of the target and, thus, is particularly suited for visualization applications (cf. Figure 4.8). In contrast, all established cues have to alter the target in one way or another, be it reshaping, recoloring, or introducing a motion. Such changes in appearance can lead to data misinterpretation. Furthermore, reserving a whole visual dimension, such as color or position, for highlighting is an expensive tradeoff.

We verified our idea using a traditional evaluation approach for preattentive cues. Typically, a series of images are displayed for a short amount of time (100-250 ms), and participants have to decide for each image whether a highlighted object is present or not. A preattentive feature is characterized by a high success rate independently of the number of overall objects, also called distractors, in the image. In addition, we also explored the performance of Deadeye in a so-called conjunction search scenario (cf. Figure 4.7) [TG80; TG88; TS86; WCF89; NS86] by combining our technique with color as a second cue. Overall, our results allowed us to draw the following conclusions:

• Deadeye works preattentively
• Deadeye does not lead to headache or any other physical strain
• in contrast to the depth cue [NS86], Deadeye cannot be processed in parallel when combined with other preattentive features
• the performance of Deadeye decreases with an increasing distance from the focus point, i.e., it is less robust in the peripheral area.

Fig. 4.8: Applicability of Deadeye: highlighting of lines in a line chart (left) and visual storytelling in a scientific visualization of chemical reactions (right).

Fig. 4.9: Examples of VR visualizations that can benefit from our contribution. (a) Educational visualizations of particle physics [DPG18]: Deadeye can be used to capture and guide the attention of the students. (b) Immersive graph visualizations [Kwo+16]: utilizing Deadeye during user interaction to highlight the selected vertices and edges. (c) Dinosaur track formation [Nov+19]: emphasizing 3D pathlines of interest in unsteady flow visualizations.

The weak spot of Deadeye is its dependence on stereo equipment because we have to render different images for each eye. Although the corresponding hardware became a commodity in recent years (e.g., 3D glasses, stereo projectors, 3D TVs), this mandatory requirement leads to an additional effort. For this reason, a more effective solution would be to transfer Deadeye to a natively stereoscopic environment, namely VR, and to apply our approach to 3D visualizations.
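On the implementation side, the technique reduces to skipping one draw call: during the stereo pass, the highlighted target is simply omitted from one eye's draw list. The tiny renderer below is invented for illustration; real engines expose the same idea through per-camera culling or layer masks:

```python
# Sketch: Deadeye as a per-eye draw-list filter in a stereo render loop.
# Hiding the target for exactly one eye triggers binocular rivalry, which
# makes the object pop out without altering its color, shape, size, or
# motion. The renderer structure here is invented for illustration; real
# engines realize the same idea via per-camera culling/layer masks.

LEFT, RIGHT = "left", "right"

class SceneObject:
    def __init__(self, name, deadeye=False, suppressed_eye=RIGHT):
        self.name = name
        self.deadeye = deadeye              # is this the highlighted target?
        self.suppressed_eye = suppressed_eye

def visible_for(objects, eye):
    """Return the draw list for one eye, omitting Deadeye targets."""
    return [o for o in objects
            if not (o.deadeye and o.suppressed_eye == eye)]

scene = [SceneObject("axis"), SceneObject("tumor", deadeye=True),
         SceneObject("vessel")]

for eye in (LEFT, RIGHT):
    names = [o.name for o in visible_for(scene, eye)]
    print(eye, names)
# left ['axis', 'tumor', 'vessel']
# right ['axis', 'vessel']
```

Note that the filtered eye renders one object fewer, which is why the overhead of the method is, if anything, negative.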
As a follow-up to our original contribution, we explored Deadeye as a highlighting technique for visualizations in VR [Kre+19a], because such stereoscopic scenarios support dichoptic presentation out of the box. There are manifold reasons for visualization in VR, such as a better understanding of spatial relationships [SB07] or the increased presence. Hence, we also need robust and intuitive highlighting techniques. While specific preattentive cues, such as color, are not affected by the transition to VR, temporal approaches, such as flickering, often interfere with aliasing caused by constant micromovements in VR. To put it another way, establishing Deadeye as a highlighting method in VR without occupying any additional visual dimension offers significant benefits for the visualization community, as depicted in Figure 4.9.
We could not assume the applicability of our technique in VR as a given fact. Firstly, dichoptic presentation is a rather subtle cue that might be overridden by more pronounced features [Zou+17]. Secondly, our method suppresses binocular disparities for the target object. However, our vision utilizes the binocular disparity generated by the horizontal offset of our eyes to gather depth information, which forms the basis for our stereo perception [Jul60; Jul71; CB15; M+76; MP79]. Therefore, we performed an in-depth evaluation, which revealed the following results:

• the preattentiveness of Deadeye is preserved in VR
• Deadeye performs robustly under heterogeneous conditions, i.e., when distractors vary in protruding properties such as color or shape (cf. Figure 4.9)
• depth perception for highlighted objects is still possible due to occlusion geometry [TWA12] and multi-perspective observation [SSN88].

A second contribution of our publication is an example integration of Deadeye into VR volume rendering [Kra+06; SBN08], as depicted in Figure 4.10. Along with a GPU-based implementation outline, the manuscript exposes a qualitative survey that demonstrates the benefits and limitations of our approach. According to our results, Deadeye is particularly advantageous in non-greyscale scenarios and, in contrast to temporal approaches, does not suffer from typical VR-related issues such as aliasing. Furthermore, our participants noticed that the highlighted object could be faded in or out depending on focus. Simply put, concentrating on something behind the target allows us to suppress the Deadeye target completely and see through it. Hence, we suggest investigating this multistable perception phenomenon in detail, as it might have use-cases beyond highlighting, be it for visualizations or VR applications in general.

Fig. 4.10: Left: our experiments in VR with homogeneous and heterogeneous distractors. Right: VR volume rendering of medical datasets as our evaluated application scenario for Deadeye.
4.3 Concluding Comments on Perception
The approaches outlined in this chapter differed slightly from our research on locomotion and interaction. In the previous chapters, our contributions were mainly motivated by a shortcoming of current VR setups, be it the limited walking space in a room-scale VR environment or the missing physical feedback of a virtual object. Hence, we proposed methods to improve the status quo of VR by making virtual environments easier to travel and more interactive overall.

In contrast, our methodology concerning perception followed the principle of amplifying the strong points of VR setups. Thanks to the high degree of immersion that VR has to offer, we were able to apply the illusion of virtual body ownership to nonhumanoid avatars, which would be rather hard (or even impossible) to achieve in a desktop environment. Furthermore, we utilized dichoptic presentation to enhance visualizations with a preattentive highlighting method that does not modify any visual properties of the target object. Hence, visualizations in VR benefit from Deadeye out of the box, i.e., without the additional hardware requirements of a typical desktop scenario.

We covered only a tiny fraction of the (visual) phenomena that are possible in a virtual environment. Nevertheless, again, the bottom line is that such investigations aim at taking advantage of VR here and now, which further increases the application possibilities of immersive setups. Hence, as researchers, we have to ask ourselves whether and how we can operationalize the search for such phenomena. From our perspective, it is more promising to start in VR and to explore all nuances of virtual experiences, instead of porting existing approaches from desktop to VR, as is often done—with little success—with digital games.

This "VR first" paradigm, combined with divergent thinking (cf. our work on Dear Data), is something that cannot be emphasized often enough concerning our perception in VR. With that, we do not only mean the exploration of real-world inspired perception phenomena: certain unrealistic perceptions, such as being an animal, are only possible in VR, which further underpins the diversity and advantageousness of such setups.
Conclusion
Throughout this synopsis, we explored three different areas of VR research: locomotion, interaction, and perception. We began by classifying the different movement approaches in VR. We outlined stationary approaches, such as gamepad or gaze-based controls, and summarized the benefits of walking-inspired techniques, such as walking in place or redirected walking. We then focused on virtual body resizing as one viable option to bring natural walking into our living rooms. Based on our GulliVR and Outstanding publications, we exposed how to use player rescaling to increase presence and spatial orientation while removing the risk of cybersickness.

Our stay in a virtual environment seldom remains limited to just walking—hence, in the next step, we presented various ways to enhance and facilitate interaction in VR. We proposed a self-transforming controller to increase presence and player enjoyment in VR games and embedded this topic in a more general discussion about object representations and inventories. Apart from these object-related contributions, we investigated gestures as an essential entity in interaction research. We emphasized the audience and created 3D gestures that are easy to understand and to predict by an observer, be it during a presentation, a software demo, or for entertainment purposes. Finally, we introduced MorphableUI, a distributed hypergraph-based system that facilitates the integration and usage of diverse I/O modalities in and outside VR, such as the aforementioned physical proxies and gestures.

In the case of interaction and locomotion, we mainly dealt with the shortcomings of VR. For instance, we addressed issues such as the limited space for walking or the (mostly) missing haptics. Our contributions in the field of perception emphasize the strong points of immersive setups. We have transferred the illusion of virtual body ownership to nonhumanoid avatars and exploited this phenomenon to create engaging and novel gaming experiences with animals in the leading role. As a further example, we introduced Deadeye and demonstrated how to utilize the dichoptic presentation capability of immersive setups for preattentive highlighting in visualizations. In particular, our technique allows guiding the attention to an object of interest with the unique benefit of not modifying any visual property of the target.

With these accomplishments in mind, we dedicate the remaining sections of this chapter to a critical assessment of our contributions and their potential regarding follow-up research. We round off the synopsis by presenting our concluding thoughts on VR research in general and outlining our lessons learned along the way.

5.1 Critical Assessment of the Contributions and Possible Future Work

We outlined 16 manuscripts in this synopsis. It is fair to assume that they differ regarding their overall impact on the status quo of VR and computer science research in general. Assessing the significance of a contribution shortly after the publication date is somewhat speculative, and the rating of one's own works is always prone to personal bias. Nevertheless, such an initial judgment helps to complete the big picture of this dissertation.

To maintain focus, we do not discuss each contribution in detail. Instead, for each research area, we outline the—from our point of view—most significant achievement(s), reason our choice, and provide possible follow-up research questions.
Locomotion.
Our major achievement in this area is the introduction of the virtual body resizing concept. The award-winning GulliVR publication (CHI Play 2018 honorable mention) demonstrates how an appropriate adaptation of the virtual eye distance can be used to move at an increased pace without the risk of cybersickness. Although prior research reported that an increased stereo base makes objects appear nearer and smaller [Ren+15; HGE11], we are not aware of any previous attempts to utilize this phenomenon as a viable countermeasure for locomotion-induced cybersickness. Our core idea is applicable beyond games and entertainment, which is reflected by the diversity of citing sources [Ioa+19; DW19] and follow-up publications [Abt+19; CKK19a; CKK19b]. We see two possible research directions based on GulliVR: Firstly, the dynamic resizing approach has a straightforward use-case in multiscale virtual environments [AM16; ZF02; Kop+06], be it for urban planning or medical explorations. Secondly, we suggest further experiments on the significantly altered virtual eye separation. We believe that the induced change in the perception of object size and distance is a subtle, yet powerful phenomenon that creates opportunities for several novel VR experiences, such as the miniature world look in the case of Outstanding [CKK19a; CKK19b].
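The underlying parameter coupling can be condensed to a few lines: eye separation, eye height, and travel speed are scaled by the same factor, so the world reads as a miniature model instead of fast self-motion. The baseline values below are illustrative placeholders, not the exact parameters of GulliVR:

```python
# Sketch: the parameter coupling behind virtual body resizing. Scaling the
# virtual eye distance together with eye height and movement speed by one
# factor makes the environment appear as a miniature model, so fast travel
# is perceived as ordinary steps through a small world rather than as
# high-speed self-motion. Baseline values are illustrative placeholders.

BASE_IPD_M = 0.064       # typical interpupillary distance
BASE_EYE_HEIGHT_M = 1.7
BASE_SPEED_MPS = 1.4     # comfortable walking speed

def giant_mode(scale: float) -> dict[str, float]:
    """Couple eye separation, eye height, and speed to one scale factor."""
    return {
        "eye_separation": BASE_IPD_M * scale,
        "eye_height": BASE_EYE_HEIGHT_M * scale,
        "movement_speed": BASE_SPEED_MPS * scale,
    }

print(giant_mode(1.0))   # normal mode
print(giant_mode(10.0))  # giant mode: wide stereo base, fast travel
```

Keeping all three values on the same factor is the crucial point: widening only the stereo base or raising only the speed would reintroduce the visual-vestibular mismatch that causes cybersickness.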
Interaction.
In the respective chapter, we concluded that most of our results are also applicable outside VR. However, as our assessment focuses on VR research, we emphasize our work on self-transforming controllers as an essential contribution in this area. We regard this publication as a significant milestone because of our findings related to weight shifting and weight perception in VR. In contrast to the pre-existing opinion that a physical proxy has to mimic the virtual object as closely as possible, our research demonstrated that such proxies could (and should) be significantly lighter and handier. Hence, we have shown that most of the commercially available VR gun controllers are on the wrong path, as they aim to replicate the exact look and feel of the real gun, which is unnecessary and even counterproductive. More importantly, we have determined that a slight shift of weight is sufficient to convey the impression of holding a completely different object (or gun in our case), which paved the way for further experiments on self-transforming controllers [ZK19]. Since haptics plays a crucial role in our virtual experience, we argue that research on similar morphable controllers significantly advances the status quo of VR. As future work, we suggest a broader exploration of such transforming devices by including additional parameters aside from weight distribution. For instance, we can modify air resistance, as in the case of Drag:on by Zenner et al. [ZK19], or add a tactile surface representation similar to the Haptic Revolver by Whitmire et al. [Whi+18].
Perception.
We outline two significant contributions—VR animals in the entertainment sector and Deadeye in the area of visualization. Our work on animal avatars pioneered the concept of nonhumanoid body ownership in VR. In multiple experiments, we confirmed that IVBO applies to avatars that are very different from human beings. These insights allowed us to create novel, engaging virtual experiences with animals in the primary role. Our research promotes the injection of superhuman abilities, such as flying or using a virtual horn, into VR games as a central component of player experience. Although our publications emphasize digital games, we assume that animal embodiment is also a viable enhancement for other domains. In particular, we hypothesize that being an animal in VR increases our empathy for animals and might be applicable as an in virtuo exposure method to combat animal-related fears.

In contrast to this certainly unusual topic, our work on Deadeye, an IEEE VIS 2018 best paper award publication, is characterized by its simplicity and straightforward advantages for everyday visualizations. The discovery of a novel preattentive visualization method is a rare occasion. Together with the fact that Deadeye does not modify any visual property of the target, it is reasonable to assume a high significance of this contribution. The applicability of our method in VR is what brings out the full potential of Deadeye because dichoptic presentation comes out of the box in such setups. As a result, we get a (less than) zero-overhead highlighting method that is straightforward to implement while preserving all attributes of the target object. As a next step, we suggest going beyond visualizations and evaluating Deadeye for graphical user interfaces in VR, as they heavily rely on highlighting and attention guidance and would supposedly benefit from the non-invasiveness of our method.
VR Research and the Lessons We Learned

Our synopsis began with the question of whether (our) VR research is “worth it”. In the previous section, we outlined our most significant contributions and their potential impact. However, final publications are not everything: along the way, we encountered several hidden obstacles, failed at particular challenges, and gained scientific maturity. Hence, as the last words of the synopsis, we summarize the essential lessons that we have learned during our VR research.

VR first instead of fixes and ports.
We noticed that VR is often treated as an extension of conventional desktop-based computing. We hear people talking about “porting an application to VR” or read changelogs of games stating that a particular element, such as the user interface or locomotion, “was adapted to VR”. From our experience, this approach of taking an application, rendering it in stereo, i.e., in VR, and then asking oneself how to fix the countless usability issues is worrying, to say the least. Not only are such fixes difficult or sometimes even impossible to achieve; this patching process also leaves a certain aftertaste and increases the overall skepticism regarding the maturity of VR. Moreover, how often have we witnessed applications that remained unusable in VR, no matter how many patches were applied? We argue that, as researchers and practitioners, we must distance ourselves from this “port-and-fix” mindset. If we really want to create meaningful experiences in VR, we have to embrace this medium as a whole, with all its benefits and drawbacks, which often means starting from scratch. Do not get us wrong: most principles of human-computer interaction still apply to virtual environments, and that brings us to the next point:
Reinventing the wheel in VR.
For years, VR research has been booming. Related conferences, such as the IEEE VR, went from meet-ups with a few hundred attendees to multi-track symposiums with thousands of visitors. Conferences on human-computer interaction, such as the ACM CHI, dedicate multiple sessions to VR research nowadays. This seemingly rapid progress has both advantages and disadvantages. On the one hand, the increasing number of scientists is, without a doubt, an essential factor for the sustainable success of VR. More workforce results in more ideas and broader dissemination of the respective findings thanks to our lively, vibrant community. On the other hand, we noticed that VR research is occasionally accompanied by some superficiality: instead of diving into the literature on stereoscopic environments from the nineties, we tend to overlook previous efforts and claim novelty just because nobody has tried our method before with an up-to-date VR HMD.

Another example of reinventing the wheel is to (re-)publish methods and applications by adding “in VR” to the title. Not all tasks are automatically better in a virtual environment; we would even go so far as to say that most daily use cases are better off without VR. We should always ask ourselves whether the VR-induced benefits to our application justify the fact that people have to put on a slightly uncomfortable HMD, potentially experience cybersickness, and lose the advantages of a desktop environment, such as efficient text input. For instance, we needed VR in our animal avatar research, as we built upon the virtual body ownership illusion. Otherwise, a non-VR setup would have enabled such nonhumanoid experiences for a wider audience. This example also introduces the last lesson we want to share:
Simulations vs. unrealistic experiences.
Certainly, we can utilize the provided immersion of VR setups for ultra-realistic simulations. We can design our virtual stay to match our real life as closely as possible. However, in our opinion, one of the greatest strengths of VR is to deliver unrealistic experiences. In VR, we are faced with unique opportunities and activities that are not available to us in the real world. In our experiments, the participants were particularly attracted by various unrealistic actions, such as flying or interacting with a miniaturized world as a giant. Ultimately, we believe that such novel experiences are vital to the success of VR because curiosity is an integral part of human nature.
Bibliography

[AB05] David Alais and Randolph Blake. Binocular rivalry. MIT Press, 2005 (cit. on p. 36).
[Abt+19] Parastoo Abtahi, Mar Gonzalez-Franco, Eyal Ofek, and Anthony Steed. „I’m a Giant: Walking in Large Virtual Environments at High Speed Gains“. In:
Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems. ACM. 2019, p. 522 (cit. on pp. 12, 42).
[AH98] Stuart Anstis and Alan Ho. „Nonlinear combination of luminance excursions during flicker, simultaneous contrast, afterimages and binocular fusion“. In:
Vision Research
Journal of Computer-Mediated Communication
IEEE Transactions on Visualization and Computer Graphics
Gesture-Based Communication in Human-Computer Interaction: 5th International Gesture Workshop, GW 2003, Genova, Italy, April 15-17, 2003, Selected Revised Papers. Berlin, Heidelberg: Springer Berlin Heidelberg, 2004, pp. 421–435 (cit. on p. 21).
[AM16] Ferran Argelaguet and Morgan Maignant. „GiAnt: stereoscopic-compliant multi-scale navigation in VEs“. In:
Proceedings of the 22nd ACM Conference on Virtual Reality Software and Technology. ACM. 2016, pp. 269–277 (cit. on pp. 10, 42).
[BAS14] Kristopher J Blom, Jorge Arroyo-Palacios, and Mel Slater. „The effects of rotating the self out of the body in the full virtual body ownership illusion“. In:
Perception
Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. CHI ’10. Atlanta, Georgia, USA: ACM, 2010, pp. 2573–2582 (cit. on pp. 33, 34).
[Bat+13] Ligia Batrinca, Giota Stratou, Ari Shapiro, Louis-Philippe Morency, and Stefan Scherer. „Cicero - Towards a Multimodal Virtual Audience Platform for Public Speaking Training“. In:
International Conference on Intelligent Virtual Humans. Lecture Notes on Computer Science. Edinburgh, UK, Aug. 2013, pp. 116–128 (cit. on p. 21).
[Bau+01] M. Bauer, B. Bruegge, Gudrun Klinker, et al. „Design of a component-based augmented reality framework“. In:
Augmented Reality, 2001. Proceedings. IEEE and ACM International Symposium on. 2001, pp. 45–54 (cit. on p. 22).
[BC98] Matthew Botvinick and Jonathan Cohen. „Rubber hands ‘feel’ touch that eyes see“. In:
Nature
Environment and Behavior
Aerospace and Electronics Conference, 1996. NAECON 1996., Proceedings of the IEEE 1996 National. Vol. 1. IEEE. 1996, pp. 429–434 (cit. on p. 11).
[BGS13] Domna Banakou, Raphaela Groten, and Mel Slater. „Illusory ownership of a virtual child body causes overestimation of object sizes and implicit attitude changes“. In:
Proceedings of the National Academy of Sciences
IEEETransactions on Pattern Analysis and Machine Intelligence
Trends in cognitive sciences
Psychological review
Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. CHI ’13. Paris, France: ACM, 2013, pp. 685–694 (cit. on p. 17).
[Bol17] Costas Boletsis. „The new era of virtual reality locomotion: a systematic literature review of techniques and a proposed typology“. In:
Multimodal Technologies and Interaction
IEEE Transactions on Visualization and Computer Graphics
[Bou+06] Stéphane Bouchard, Sophie Côté, Julie St-Jacques, Geneviève Robillard, and Patrice Renaud. „Effectiveness of virtual reality exposure in the treatment of arachnophobia using 3D games“. In:
Technology and health care
Proceedings of the 2016 Symposium on Spatial User Interaction. ACM. 2016, pp. 33–42.
[Boz+16b] Evren Bozgeyikli, Andrew Raij, Srinivas Katkoori, and Rajiv Dubey. „Point & teleport locomotion technique for virtual reality“. In:
Proceedings of the 2016 Annual Symposium on Computer-Human Interaction in Play. ACM. 2016, pp. 205–216 (cit. on p. 12).
[Bro+15] Michael Brown, Aidan Kehoe, Jurek Kirakowski, and Ian Pitt. „Beyond the gamepad: HCI and game controller design and evaluation“. In:
Game User Experience Evaluation. Springer, 2015, pp. 263–285 (cit. on p. 17).
[Bro10] Frederick P. Brooks.
The Design of Design: Essays from a Computer Scientist. 1st. Addison-Wesley Professional, 2010 (cit. on p. 33).
[BSH09] Gerd Bruder, Frank Steinicke, and Klaus H Hinrichs. „Arch-explore: A natural user interface for immersive architectural walkthroughs“. In: (2009).
[BWB14] S Bouchard, BK Wiederhold, and J Bossé.
Advances in Virtual Reality and Anxiety Disorders. 2014 (cit. on p. 32).
[Car16] Jorge Cardoso. „Comparison of gesture, gamepad, and gaze-based locomotion for VR worlds“. In:
Proceedings of the 22nd ACM Conference on Virtual Reality Software and Technology. ACM. 2016, pp. 319–320.
[CB07] Kathy Charmaz and Linda Liska Belgrave. „Grounded theory“. In:
The Black-well encyclopedia of sociology (2007) (cit. on p. 19).[CB15] Baptiste Caziot and Benjamin T. Backus. „Stereoscopic Offset Makes ObjectsEasier to Recognize“. In:
PLoS ONE
Handbook of digital games (2014), pp. 337–361 (cit. on p. 7).
[CDC96] Nigel Cross, Kees Dorst, and Henri Christiaans.
Analysing design activity. English. Conference proceedings. Chichester, New York: Wiley, 1996 (cit. on p. 33).
[CHW97] Albert S Carlin, Hunter G Hoffman, and Suzanne Weghorst. „Virtual reality and tactile augmentation in the treatment of spider phobia: a case report“. In:
Behaviour research and therapy
Proceedings of the 18th ACM International Conference on Multimodal Interaction. ICMI 2016. Tokyo, Japan: ACM, 2016, pp. 129–136 (cit. on p. 21).
[CKJ13] YoungJoong Chang, Jaibeom Kim, and Jaewoo Joo. „An Exploratory Study on the Evolution of Design Thinking: Comparison of Apple and Samsung“. In:
Design Management Journal
Extended Abstracts of the 2019 CHI Conference on Human Factors in Computing Systems. CHI EA ’19. Glasgow, Scotland, UK: ACM, 2019, LBW1612:1–LBW1612:6 (cit. on pp. 7, 12, 13, 42).
[CKK19b] Sebastian Cmentowski, Andrey Krekhov, and Jens Krüger. „Outstanding: A Multi-Perspective Travel Approach for Virtual Reality Games“. In:
Proceedings of the Annual Symposium on Computer-Human Interaction in Play. CHI PLAY ’19. Barcelona, Spain: ACM, 2019, pp. 287–299 (cit. on pp. 7, 12, 13, 42).
[CKKew] Sebastian Cmentowski, Andrey Krekhov, and Jens Krüger. „"I Packed my Bag and in It I Put...": A Taxonomy of Inventory Systems for Virtual Reality Games“. In: Submission to the 2020 CHI Conference on Human Factors in Computing Systems (under review) (cit. on pp. 17, 19).
[CLW14] Isaac Cho, Jialei Li, and Zachary Wartell. „Evaluating dynamic-adjustment of stereo view parameters in a multi-scale virtual environment“. In:
3D User Interfaces (3DUI), 2014 IEEE Symposium on. IEEE. 2014, pp. 91–98 (cit. on p. 11).
[Cme+19] Sebastian Cmentowski, Andrey Krekhov, Ann-Marie Müller, and Jens Krüger. „Toward a Taxonomy of Inventory Systems for Virtual Reality Games“. In:
Extended Abstracts of the Annual Symposium on Computer-Human Interaction in Play Companion Extended Abstracts. CHI PLAY ’19 Extended Abstracts. Barcelona, Spain: ACM, 2019, pp. 363–370 (cit. on pp. 17, 19).
[CMR90] Stuart K. Card, Jock D. Mackinlay, and George G. Robertson. „The design space of input devices“. In:
Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. CHI ’90. Seattle, Washington, United States: ACM, 1990, pp. 117–124 (cit. on p. 22).
[Col+06] F. Cole, D. DeCarlo, A. Finkelstein, et al. „Directing Gaze in 3D Models with Stylized Focus“. In:
Proceedings of the 17th Eurographics Conference on Rendering Techniques. EGSR ’06. Nicosia, Cyprus: Eurographics Association, 2006, pp. 377–387 (cit. on p. 35).
[Cor19] HTC Corporation.
HTC Vive Tracker . Website. Retrieved March 10, 2019 from . 2019 (cit. on p. 30).[Cuc+12] Stefania Cuccurullo, Rita Francese, Sharefa Murad, Ignazio Passero, andMaurizio Tucci. „A Gestural Approach to Presentation Exploiting MotionCapture Metaphors“. In:
Proceedings of the International Working Conference on Advanced Visual Interfaces. AVI ’12. Capri Island, Italy: ACM, 2012, pp. 148–155 (cit. on p. 21).
[CW12] Jacek Chmielewski and Krzysztof Walczak. „Application Architectures for Smart Multi-device Applications“. In:
Proceedings of the Workshop on Multi-device App Middleware. Multi-Device ’12. Montreal, Quebec, Canada: ACM, 2012, 5:1–5:5 (cit. on p. 22).
[Dai+13] Florian Daiber, Andrey Krekhov, Marco Speicher, Jens Krüger, and Antonio Krüger. „A Framework for Prototyping and Evaluation of Sensor-based Mobile Interaction with Stereoscopic 3D“. In:
ACM ITS Workshop on Interactive Surfaces for Interaction with Stereoscopic 3D. ISIS3D ’13. St. Andrews, United Kingdom: ACM, 2013, pp. 13–16 (cit. on pp. 17, 22).
[DCC97] Rudolph P. Darken, William R. Cockayne, and David Carmein. „The Omnidirectional Treadmill: A Locomotion Device for Virtual Worlds“. In:
Proceedings of the 10th Annual ACM Symposium on User Interface Software and Technology. UIST ’97. Banff, Alberta, Canada: ACM, 1997, pp. 213–221.
[DF01] P. Dragicevic and Jean-Daniel Fekete. „Input Device Selection and Interaction Configuration with ICON“. In:
Proceedings of the HCI01 Conference on People and Computers XV. Springer, 2001, pp. 543–558 (cit. on p. 23).
[DH06] R. Dachselt and A. Hübner. „A Survey and Taxonomy of 3D Menu Techniques“. In:
Proceedings of the 12th Eurographics Conference on Virtual Environments. EGVE’06. Lisbon, Portugal: Eurographics Association, 2006, pp. 89–99 (cit. on p. 22).
[DJA93] Nils Dahlbäck, Arne Jönsson, and Lars Ahrenberg. „Wizard of Oz Studies: Why and How“. In:
Proceedings of the 1st International Conference on Intelligent User Interfaces. IUI ’93. Orlando, Florida, USA: ACM, 1993, pp. 193–200 (cit. on p. 21).
[DL74] Ch MM De Weert and Willem Johannes Maria Levelt. „Binocular brightness combinations: Additive and nonadditive aspects“. In:
Perception & Psychophysics
IEEE computer graphics and applications
Computer Graphics International Conference. Springer. 2019, pp. 203–215 (cit. on p. 42).
[Ehr07] H Henrik Ehrsson. „The experimental induction of out-of-body experiences“. In:
Science
Perception
Computer Gaming World (1992), pp. 81–83 (cit. on p. 2).
[Fer+16] Andrea Ferracani, Daniele Pezzatini, Jacopo Bianchini, Gianmarco Biscini, and Alberto Del Bimbo. „Locomotion by natural gestures for immersive virtual environments“. In:
Proceedings of the 1st international workshop on multimedia alternate realities. ACM. 2016, pp. 21–24.
[FF16] Ajoy S Fernandes and Steven K Feiner. „Combating VR sickness through subtle dynamic field-of-view modification“. In:
3D User Interfaces (3DUI), 2016 IEEE Symposium on. IEEE. 2016, pp. 201–210 (cit. on p. 8).
[FM09] Monika A Formankiewicz and JD Mollon. „The psychophysics of detecting binocular discrepancies of luminance“. In:
Vision research
Visual attention and consciousness. Psychology Press, 2012 (cit. on p. 36).
[Gal+15] Henrique Galvan Debarba, Eray Molla, Bruno Herbelin, and Ronan Boulic. „Characterizing embodied interaction in first and third person perspective viewpoints“. In: . EPFL-CONF-206979. 2015 (cit. on p. 30).
[Gar+02] Azucena Garcia-Palacios, Hunter Hoffman, Albert Carlin, TA Furness III, and Cristina Botella. „Virtual reality in the treatment of spider phobia: a controlled study“. In:
Behaviour research and therapy
Proceedings of the 2017 CHI Conference on Human Factors in ComputingSystems . CHI ’17. Denver, Colorado, USA: ACM, 2017, pp. 208–219 (cit. onp. 35).[GJM11] Sukeshini A. Grandhi, Gina Joue, and Irene Mittelberg. „Understanding Natu-ralness and Intuitiveness in Gesture Production: Insights for Touchless Gestu-ral Interfaces“. In:
Proceedings of the SIGCHI Conference on Human Factors inComputing Systems . CHI ’11. Vancouver, BC, Canada: ACM, 2011, pp. 821–824 (cit. on p. 20).[Gol94] Gabriela Goldschmidt. „On visual design thinking: the vis kids of architecture“.In:
Design Studies
Frontiersin Robotics and AI
PloS one
Nursing research
Proceedings of the 9th International Conference on IntelligentUser Interfaces . IUI ’04. Funchal, Madeira, Portugal: ACM, 2004, pp. 93–100(cit. on p. 23).[GWW07] Krzysztof Z. Gajos, Jacob O. Wobbrock, and Daniel S. Weld. „AutomaticallyGenerating User Interfaces Adapted to Users’ Motor and Vision Capabilities“.In:
Proceedings of the 20th Annual ACM Symposium on User Interface Software and Technology. UIST ’07. Newport, Rhode Island, USA: ACM, 2007, pp. 231–240 (cit. on p. 23).
[Hab+17] M.P. Jacob Habgood, David Wilson, David Moore, and Sergio Alapont. „HCI Lessons From PlayStation VR“. In:
Extended Abstracts Publication of the AnnualSymposium on Computer-Human Interaction in Play . CHI PLAY ’17 ExtendedAbstracts. Amsterdam, The Netherlands: ACM, 2017, pp. 125–135 (cit. onp. 8).[Hai+16] F. Haider, L. Cerrato, N. Campbell, and S. Luz. „Presentation quality as-sessment using acoustic information and hand movements“. In: .Mar. 2016, pp. 2812–2816 (cit. on p. 20).[Hal+16] K. Hall, C. Perin, P. Kusalik, and Carl Gutwin. „Formalizing Emphasis inInformation Visualization“. In:
Computer Graphics Forum . Vol. 35. 3. 2016,pp. 717–737 (cit. on p. 35).[Har+14] Alyssa Harris, Kevin Nguyen, Preston Tunnell Wilson, Matthew Jackoski, andBetsy Williams. „Human joystick: Wii-leaning to translate in large virtualenvironments“. In:
Proceedings of the 13th ACM SIGGRAPH InternationalConference on Virtual-Reality Continuum and its Applications in Industry . ACM.2014, pp. 231–234.[HE12] Christopher Healey and James Enns. „Attention and Visual Memory in Visual-ization and Computer Graphics“. In:
IEEE Transactions on Visualization andComputer Graphics
Pres-ence: Teleoperators & Virtual Environments
Seventh IEEE International Conference on AdvancedLearning Technologies (ICALT 2007) . July 2007, pp. 923–924 (cit. on p. 21).[Hes+12] Luke Hespanhol, Martin Tomitsch, Kazjon Grace, Anthony Collins, and JudyKay. „Investigating Intuitiveness and Effectiveness of Gestures for Free SpatialInteraction with Large Displays“. In:
Proceedings of the 2012 InternationalSymposium on Pervasive Displays . PerDis ’12. Porto, Portugal: ACM, 2012,6:1–6:6 (cit. on p. 20).[Het+90] Lawrence J. Hettinger, Kevin S. Berbaum, Robert S. Kennedy, William P.Dunlap, and Margaret D. Nolan. „Vection and simulator sickness“. In: 2 (Feb.1990), pp. 171–81 (cit. on p. 8).[HGE11] Björn van der Hoort, Arvid Guterstam, and H Henrik Ehrsson. „Being Barbie:the size of one’s own body determines the perceived size of the world“. In:
PloS one
Proceedingsof the ACM Symposium on Virtual Reality Software and Technology . VRST ’03.Osaka, Japan: ACM, 2003, pp. 225–112 (cit. on p. 22).[Hof+03] Hunter G Hoffman, Azucena Garcia-Palacios, Albert Carlin, Thomas A FurnessIii, and Cristina Botella-Arbona. „Interfaces that heal: Coupling real andvirtual objects to treat spider phobia“. In: international Journal of Human-Computer interaction
[Hof98] Hunter Hoffman. „Virtual reality: A new tool for interdisciplinary psychology research“. In:
CyberPsychology & Behavior
Presence: Teleoperators & Virtual Environments
IEEE Transactions onMultimedia
Nature reviews. Neuroscience
Proceedings of the 2019 CHI Conference on Human Factors in ComputingSystems . ACM. 2019, p. 158 (cit. on p. 42).[IOO16] Miyu Iwafune, Taisuke Ohshima, and Yoichi Ochiai. „Coded Skeleton: Pro-grammable Deformation Behaviour for Shape Changing Interfaces“. In:
SIG-GRAPH ASIA 2016 Emerging Technologies . SA ’16. Macau: ACM, 2016, 1:1–1:2(cit. on p. 18).[Jen+08] Charlene Jennett, Anna L Cox, Paul Cairns, et al. „Measuring and definingthe experience of immersion in games“. In:
International journal of human-computer studies
Proceedings of the ACM Symposiumon Virtual Reality Software and Technology . VRST ’02. Hong Kong, China: ACM,2002, pp. 97–104 (cit. on p. 22).[Jo+17] Dongsik Jo, Kangsoo Kim, Gregory F Welch, et al. „The impact of avatar-owner visual similarity on body ownership in immersive virtual reality“.In:
Proceedings of the 23rd ACM Symposium on Virtual Reality Software andTechnology . ACM. 2017, p. 77 (cit. on p. 28).[Jul60] Bela Julesz. „Binocular Depth Perception of Computer-Generated Patterns“.In:
Bell Labs Technical Journal
Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems. ACM. 2018, p. 601 (cit. on p. 28).
[KCK18] Andrey Krekhov, Sebastian Cmentowski, and Jens Krüger. „VR Animals: Surreal Body Ownership in Virtual Reality Games“. In:
Proceedings of the 2018 Annual Symposium on Computer-Human Interaction in Play Companion Extended Abstracts. CHI PLAY ’18 Extended Abstracts. Melbourne, VIC, Australia: ACM, 2018, pp. 503–511 (cit. on pp. 27, 29).
[KCK19] A. Krekhov, S. Cmentowski, and J. Krüger. „The Illusion of Animal Body Ownership and Its Potential for Virtual Reality Games“. In: . Aug. 2019, pp. 1–8 (cit. on pp. 27, 29).
[KE] Andrey Krekhov and Katharina Emmerich. „Player Locomotion in Virtual Reality Games“. In:
The Digital Gaming Handbook. Ed. by Roberto Dillon. Boca Raton, Florida, USA: CRC Press Taylor & Francis Group (accepted). Chap. 14 (cit. on pp. 7, 8).
[Ken+89] Robert S Kennedy, Michael G Lilienthal, Kevin S Berbaum, DR Baltzley, and ME McCauley. „Simulator sickness in US Navy flight simulators.“ In:
Aviation,Space, and Environmental Medicine
Proceedings of the 2015 Annual Symposium onComputer-Human Interaction in Play . CHI PLAY ’15. London, United Kingdom:ACM, 2015, pp. 565–570 (cit. on p. 18).[Kil+12] Konstantina Kilteni, Jean-Marie Normand, Maria V Sanchez-Vives, and MelSlater. „Extending body space in immersive virtual reality: a very long armillusion“. In:
PloS one
Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. CHI ’12. Austin, Texas, USA: ACM, 2012, pp. 2885–2894 (cit. on p. 22).
[Kit+17] Alexandra Kitson, Abraham M Hashemian, Ekaterina R Stepanova, Ernst Kruijff, and Bernhard E Riecke. „Comparing leaning-based motion cueing interfaces for virtual reality locomotion“. In: . IEEE. 2017, pp. 73–82.
[KK19] A. Krekhov and J. Krüger. „Deadeye: A Novel Preattentive Visualization Technique Based on Dichoptic Presentation“. In:
IEEE Transactions on Visualization and Computer Graphics
Eurographics 2019 - Education Papers. Ed. by Marco Tarini and Eric Galin. The Eurographics Association, 2019 (cit. on pp. 27, 33).
[Kol95] Eugenia M Kolasinski.
Simulator Sickness in Virtual Environments.
Tech. rep.Army Research Inst. for the Behavioral and Social Sciences, Alexandria, VA,1995 (cit. on p. 8).[Kop+06] Regis Kopper, Tao Ni, Doug A Bowman, and Marcio Pinho. „Design andevaluation of navigation techniques for multiscale virtual environments“. In:
Virtual Reality Conference, 2006 . Ieee. 2006, pp. 175–182 (cit. on pp. 10, 42).[Kor+16] Martijn JL Kors, Gabriele Ferri, Erik D Van Der Spek, Cas Ketel, and Ben AMSchouten. „A breathtaking journey. On the design of an empathy-arousingmixed-reality game“. In:
Proceedings of the 2016 Annual Symposium onComputer-Human Interaction in Play . ACM. 2016, pp. 91–104 (cit. on p. 28).
[Kra+06] Andrea Kratz, Markus Hadwiger, Anton Fuhrmann, Rainer Splechtna, and Katja Bühler. „GPU-based high-quality volume rendering for virtual environments“. In:
International Workshop on Augmented Environments for Medical Imaging and Computer Aided Surgery (AMI-ARCS). Vol. 2006. Citeseer. 2006 (cit. on pp. 33, 38).
[Kre+16] Andrey Krekhov, Jürgen Grüninger, Kevin Baum, David McCann, and Jens Krüger. „MorphableUI: A Hypergraph-Based Approach to Distributed Multimodal Interaction for Rapid Prototyping and Changing Environments“. In:
Proceedings of The 24th International Conference in Central Europe on Computer Graphics, Visualization and Computer Vision 2016 in co-operation with EUROGRAPHICS. WSCG ’16. Plzen, Czech Republic: Václav Skala-UNION Agency, 2016, pp. 299–308 (cit. on pp. 17, 22).
[Kre+17a] Andrey Krekhov, Katharina Emmerich, Maxim Babinski, and Jens Krüger. „Gestures From the Point of View of an Audience: Towards Anticipatable Interaction of Presenters With 3D Content.“ In:
Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems. CHI ’17. Denver, Colorado, USA: ACM, 2017, pp. 5284–5294 (cit. on pp. 17, 20).
[Kre+17b] Andrey Krekhov, Katharina Emmerich, Philipp Bergmann, Sebastian Cmentowski, and Jens Krüger. „Self-Transforming Controllers for Virtual Reality First Person Shooters“. In:
Proceedings of the Annual Symposium on Computer-Human Interaction in Play. CHI PLAY ’17. Amsterdam, The Netherlands: ACM, 2017, pp. 517–529 (cit. on p. 17).
[Kre+18] Andrey Krekhov, Sebastian Cmentowski, Katharina Emmerich, Maic Masuch, and Jens Krüger. „GulliVR: A Walking-Oriented Technique for Navigation in Virtual Reality Games Based on Virtual Body Resizing“. In:
Proceedings of the 2018 Annual Symposium on Computer-Human Interaction in Play. CHI PLAY ’18. Melbourne, VIC, Australia: ACM, 2018, pp. 243–256 (cit. on pp. 7, 11).
[Kre+19a] A. Krekhov, S. Cmentowski, A. Waschk, and J. Krüger. „Deadeye Visualization Revisited: Investigation of Preattentiveness and Applicability in Virtual Environments“. In:
IEEE Transactions on Visualization and Computer Graphics (2019), pp. 1–1 (cit. on pp. 27, 33, 37).
[Kre+19b] Andrey Krekhov, Sebastian Cmentowski, Katharina Emmerich, and Jens Krüger. „Beyond Human: Animals As an Escape from Stereotype Avatars in Virtual Reality Games“. In:
Proceedings of the Annual Symposium on Computer-Human Interaction in Play. CHI PLAY ’19. Barcelona, Spain: ACM, 2019, pp. 439–451 (cit. on pp. 27, 31).
[KRR10] Werner A. König, Roman Rädle, and Harald Reiterer. „Interactive Design of Multimodal User Interfaces - Reducing technical and visual complexity“. In:
Journal on Multimodal User Interfaces
IEEE Transactions on Visualization and Computer Graphics
[Lam+09] Marc Lambooij, Marten Fortuin, Ingrid Heynderickx, and Wijnand IJsselsteijn. „Visual discomfort and visual fatigue of stereoscopic displays: A review“. In:
Journal of Imaging Science and Technology
ACM SIGCHI Bulletin
Journal of Computer-Mediated Communication
Science
Virtual Reality, 2002.Proceedings. IEEE . IEEE. 2002, pp. 164–171 (cit. on p. 8).[LJ16] Lorraine Lin and Sophie Jörg. „Need a hand?: how appearance affects thevirtual hand illusion“. In:
Proceedings of the ACM Symposium on AppliedPerception . ACM. 2016, pp. 69–76 (cit. on p. 28).[LKK16] Sukwon Lee, Sung-Hee Kim, and Bum Chul Kwon. „Vlat: Development of avisualization literacy assessment test“. In:
IEEE transactions on visualizationand computer graphics
Proceedings of the 25th InternationalConference on Artificial Reality and Telexistence and 20th Eurographics Sym-posium on Virtual Environments . Eurographics Association. 2015, pp. 1–8(cit. on pp. 27, 28).[LLS96] Nikos K Logothetis, David A Leopold, and David L Sheinberg. „What is rivallingduring binocular rivalry?“ In:
Nature
Dear Data . Chronicle Books, 2016 (cit. onp. 33).[LRC12] Jason Lankow, Josh Ritchie, and Ross Crooks.
Infographics: The power ofvisual storytelling . John Wiley & Sons, 2012 (cit. on p. 33).[Lug+16] Jean-Luc Lugrin, Ivan Polyschev, Daniel Roth, and Marc Erich Latoschik.„Avatar anthropomorphism and acrophobia“. In:
Proceedings of the 22nd ACMConference on Virtual Reality Software and Technology . ACM. 2016, pp. 315–316 (cit. on p. 28).[M+76] David Marr, Tomaso Poggio, et al. „Cooperative computation of stereo dis-parity“. In:
From the Retina to the Neocortex (1976), pp. 239–243 (cit. onp. 38).[McC+15] Morgan McCullough, Hong Xu, Joel Michelson, et al. „Myo arm: swinging toexplore a VE“. In:
Proceedings of the ACM SIGGRAPH Symposium on AppliedPerception . ACM. 2015, pp. 107–113.
[MCR90] Jock Mackinlay, Stuart K. Card, and George G. Robertson. „A semantic analysis of the design space of input devices“. In:
Hum.-Comput. Interact.
Proceedings of the22nd ACM Conference on Virtual Reality Software and Technology . ACM. 2016,pp. 327–328 (cit. on p. 9).[MGM93] David Maulsby, Saul Greenberg, and Richard Mander. „Prototyping an In-telligent Agent Through Wizard of Oz“. In:
Proceedings of the INTERACT ’93and CHI ’93 Conference on Human Factors in Computing Systems . CHI ’93.Amsterdam, The Netherlands: ACM, 1993, pp. 277–284 (cit. on p. 21).[Mic+04] G. Michelitsch, J. Williams, M. Osen, B. Jimenez, and S. Rapp. „HapticChameleon: A New Concept of Shape-changing User Interface Controls withForce Feedback“. In:
CHI ’04 Extended Abstracts on Human Factors in Comput-ing Systems . CHI EA ’04. Vienna, Austria: ACM, 2004, pp. 1305–1308 (cit. onp. 18).[Mon70] K E Money. „Motion sickness.“ In:
Physiological Reviews
Proceedings of the Royal Society of London. Series B. BiologicalSciences
Frontiers in human neuroscience
Extended Abstracts Publication of the AnnualSymposium on Computer-Human Interaction in Play . ACM. 2017, pp. 599–605(cit. on p. 28).[Nak+16] Ken Nakagaki, Pasquale Totaro, Jim Peraino, et al. „HydroMorph: ShapeChanging Water Membrane for Display and Interaction“. In:
Proceedings of theTEI ’16: Tenth International Conference on Tangible, Embedded, and EmbodiedInteraction . TEI ’16. Eindhoven, Netherlands: ACM, 2016, pp. 512–517 (cit.on p. 18).[Nar+11] Takuji Narumi, Shinya Nishizaka, Takashi Kajinami, Tomohiro Tanikawa, andMichitaka Hirose. „Augmented reality flavors: gustatory display based onedible marker and cross-modal interaction“. In:
Proceedings of the SIGCHIconference on human factors in computing systems . ACM. 2011, pp. 93–102(cit. on p. 4).[Nor+11] Jean-Marie Normand, Elias Giannopoulos, Bernhard Spanlang, and Mel Slater.„Multisensory stimulation can induce an illusion of larger belly size in immer-sive virtual reality“. In:
PloS one
[Nov+19] Johannes Novotny, Joshua Tveite, Morgan L Turner, et al. „Developing Virtual Reality Visualizations for Unsteady Flow Analysis of Dinosaur Track Formation using Scientific Sketching“. In:
IEEE transactions on visualization and computergraphics
Vision Research
Nature
Proceedings of the 8th InternationalConference on Tangible, Embedded and Embodied Interaction . TEI ’14. Munich,Germany: ACM, 2013, pp. 49–52 (cit. on p. 18).[OBL07] Jan Ohlenburg, Wolfgang Broll, and Irma Lindt. „DEVAL: a device abstractionlayer for VR/AR“. In:
Proceedings of the 4th international conference on Uni-versal access in human computer interaction: coping with diversity . UAHCI’07.Beijing, China: Springer-Verlag, 2007, pp. 497–506 (cit. on p. 22).[Ohy+07] Seizo Ohyama, Suetaka Nishiike, Hiroshi Watanabe, et al. „Autonomic re-sponses during motion sickness induced by virtual reality“. In:
Auris NasusLarynx
Attention,Perception, & Psychophysics
PloS one
Con-sciousness and cognition
Attention, Perception, & Psychophysics
IFIP Conference onHuman-Computer Interaction . Springer. 2013, pp. 282–299.[PN18] Daniela Pusca and Derek O. Northwood. „Design thinking and its applicationto problem solving“. In:
Global Journal of Engineering Education
Cognitive neurodynamics
Redirected walking. University of North Carolina at Chapel Hill, 2005.
[RB75] James T Reason and Joseph John Brand.
Motion sickness. Academic Press, 1975 (cit. on p. 8).
[Ren+15] Rebekka S Renner, Erik Steindecker, Mathias Müller, et al. „The influence of the stereo base on blind and sighted reaches in a virtual environment“. In:
ACM Transactions on Applied Perception (TAP)
IEEE Transactions onVisualization & Computer Graphics
Proceedings of EUROGRAPHICS . Vol. 9. Citeseer. 2001, pp. 105–106.[RL06] Roy A Ruddle and Simon Lessels. „For efficient navigational search, humansrequire full physical movement, but not a rich visual scene“. In:
PsychologicalScience
ACM Transactions on Computer-HumanInteraction (TOCHI)
Proceedings of the SIGCHI Conference on HumanFactors in Computing Systems . CHI ’11. Vancouver, BC, Canada: ACM, 2011,pp. 197–206 (cit. on p. 22).[Rot+17] Daniel Roth, Jean-Luc Lugrin, Marc Erich Latoschik, and Stephan Huber.„Alpha IVBO-construction of a scale to measure the illusion of virtual bodyownership“. In:
Proceedings of the 2017 CHI Conference Extended Abstracts onHuman Factors in Computing Systems . ACM. 2017, pp. 2875–2883 (cit. onp. 29).[RS12] Rim Razzouk and Valerie Shute. „What Is Design Thinking and Why Is ItImportant?“ In:
Review of Educational Research https://doi.org/10.3102/0034654312457429 (cit. on p. 33).[RVB11] Roy A Ruddle, Ekaterina Volkova, and Heinrich H Bülthoff. „Walking improvesyour cognitive map in environments that are large-scale and large in extent“.In:
ACM Transactions on Computer-Human Interaction (TOCHI)
Interacting with Pres-ence: HCI and the Sense of Presence in Computer-mediated Environments . Walterde Gruyter GmbH & Co KG, 2014 (cit. on p. 28).[San+10] Maria V Sanchez-Vives, Bernhard Spanlang, Antonio Frisoli, Massimo Berga-masco, and Mel Slater. „Virtual hand illusion induced by visuomotor correla-tions“. In:
PloS one
Proceedings of the2007 ACM symposium on Virtual reality software and technology . ACM. 2007,pp. 121–124 (cit. on p. 37). Bibliography
SBN08] Rui Shen, Pierre Boulanger, and Michelle Noga. „Medvis: A real-time immer-sive visualization environment for the exploration of medical volumetric data“.In: . IEEE. 2008, pp. 63–68(cit. on pp. 33, 38).[SC02] William R Sherman and Alan B Craig.
Understanding virtual reality: Interface,application, and design . Elsevier, 2002 (cit. on p. 7).[SC93] Daniel Salber and Joëlle Coutaz. „Human-Computer Interaction: Third Inter-national Conference, EWHCI ’93 Moscow, Russia, August 3–7, 1993 SelectedPapers“. In: Berlin, Heidelberg: Springer Berlin Heidelberg, 1993. Chap. Apply-ing the Wizard of Oz technique to the study of multimodal systems, pp. 219–230 (cit. on p. 21).[Sch+11] Christophe Scholliers, Lode Hoste, Beat Signer, and Wolfgang De Meuter.„Midas: A Declarative Multi-touch Interaction Framework“. In:
Proceedingsof the Fifth International Conference on Tangible, Embedded, and EmbodiedInteraction . TEI ’11. Funchal, Portugal: ACM, 2011, pp. 49–56 (cit. on p. 22).[Sch03] Thomas W. Schubert. „The sense of presence in virtual environments: A three-component scale measuring spatial presence, involvement, and realness“. In:
Zeitschrift für Medienpsychologie
Proceedings of the Human Factors and ErgonomicsSociety annual meeting . Vol. 41. 2. SAGE Publications Sage CA: Los Angeles,CA. 1997, pp. 1138–1142 (cit. on p. 8).[Sla+08] Mel Slater, Daniel Pérez Marcos, Henrik Ehrsson, and Maria V Sanchez-Vives.„Towards a digital body: the virtual arm illusion“. In:
Frontiers in humanneuroscience
Frontiers in neuroscience
PloS one
Presence connect
Igroup PresenceQuestionnaire (IPQ) . 2018 (cit. on p. 7).[SSN88] Shinsuke Shimojo, Gerald H Silverman, and Ken Nakayama. „An occlusion-related mechanism of depth perception based on motion and interocularsequence“. In:
Nature
IEEE Transactions on Visualization& Computer Graphics
Bibliography Ste+10] F. Steinicke, G. Bruder, J. Jerald, H. Frenz, and M. Lappe. „Estimation of De-tection Thresholds for Redirected Walking Techniques“. In:
IEEE Transactionson Visualization and Computer Graphics
Proceedings of the 15th International Conference on Human-computer In-teraction with Mobile Devices and Services . MobileHCI ’13. Munich, Germany:ACM, 2013, pp. 123–126 (cit. on p. 18).[STV06] Patrick Salamin, Daniel Thalmann, and Frédéric Vexo. „The benefits of third-person perspective in virtual and augmented reality?“ In:
Proceedings of theACM symposium on Virtual reality software and technology . ACM. 2006, pp. 27–30 (cit. on p. 13).[Suh+02] Bongwon Suh, Allison Woodruff, Ruth Rosenholtz, and Alyssa Glass. „PopoutPrism: Adding Perceptual Principles to Overview+Detail Document Inter-faces“. In:
Proceedings of the SIGCHI Conference on Human Factors in Comput-ing Systems . CHI ’02. Minneapolis, Minnesota, USA: ACM, 2002, pp. 251–258(cit. on p. 35).[SUS95] Mel Slater, Martin Usoh, and Anthony Steed. „Taking steps: the influence ofa walking technique on presence in virtual reality“. In:
ACM Transactions onComputer-Human Interaction (TOCHI)
Proceedings of the 33rd Annual ACM Conference on Human Factors inComputing Systems . CHI ’15. Seoul, Republic of Korea: ACM, 2015, pp. 3307–3316 (cit. on p. 17).[Tan+10] Kar-Han Tan, Dan Gelb, Ramin Samadani, et al. „Gaze Awareness and Interac-tion Support in Presentations“. In:
Proceedings of the 18th ACM InternationalConference on Multimedia . MM ’10. Firenze, Italy: ACM, 2010, pp. 643–646(cit. on p. 21).[Tay+01] Russell M. Taylor II, Thomas C. Hudson, Adam Seeger, et al. „VRPN: a device-independent, network-transparent VR peripheral system“. In:
Proceedingsof the ACM symposium on Virtual reality software and technology . VRST ’01.Baniff, Alberta, Canada: ACM, 2001, pp. 55–61 (cit. on pp. 22, 24).[TDS99] James N Templeman, Patricia S Denbrook, and Linda E Sibert. „Virtual lo-comotion: Walking in place through virtual environments“. In:
Presence
Unity . Website. Retrieved March 29, 2018 from https://unity3d.com/ . 2018 (cit. on p. 3).[TG67] Davida Y Teller and Eugene Galanter. „Brightnesses, luminances, and Fech-ner’s paradox“. In:
Perception & Psychophysics
Cognitive psychology Bibliography
TG88] Anne Treisman and Stephen Gormican. „Feature analysis in early vision:Evidence from search asymmetries.“ In:
Psychological review
Journal of Experimental Psy-chology: Human Perception and Performance
Anthrozoös
Journal ofExperimental Psychology: Human Perception and Performance
Neuropsychologia
Journal of Vision
Proceedings of the 26th annual con-ference on Computer graphics and interactive techniques . ACM Press/Addison-Wesley Publishing Co. 1999, pp. 359–364 (cit. on p. 10).[VGH12] Dimitar Valkov, Alexander Giesler, and Klaus H. Hinrichs. „VINS - SharedMemory Space for Definition of Interactive Techniques“. In:
ACM Symposiumon Virtual Reality Software and Technology (VRST 2012) . ACM, 2012, pp. 145–153 (cit. on p. 22).[VKE16] Sebastian Von Mammen, Andreas Knote, and Sarah Edenhofer. „Cyber sickbut still having fun“. In:
Proceedings of the 22nd ACM Conference on VirtualReality Software and Technology . ACM. 2016, pp. 325–326 (cit. on p. 8).[Wal+14] Manuela Waldner, Mathieu Le Muzic, Matthias Bernhard, Werner Purgathofer,and Ivan Viola. „Attractive Flicker: Guiding Attention in Dynamic NarrativeVisualizations“. In:
IEEE Transactions on Visualization and Computer Graphics
IEEE transactionson visualization and computer graphics
Information visualization: perception for design . Elsevier, 2012(cit. on p. 35).[WCF89] Jeremy M Wolfe, Kyle R Cave, and Susan L Franzel. „Guided search: analternative to the feature integration model for visual search.“ In:
Journalof Experimental Psychology: Human perception and performance
Bibliography Weg+17] Konstantin Wegner, Sven Seele, Helmut Buhler, et al. „Comparison of twoinventory design concepts in a collaborative virtual reality serious game“. In:
Extended Abstracts Publication of the Annual Symposium on Computer-HumanInteraction in Play . ACM. New York, NY, USA: ACM, 2017, pp. 323–329 (cit.on p. 19).[WF88] Jeremy M Wolfe and Susan L Franzel. „Binocularity and visual search“. In:
Attention, Perception, & Psychophysics
IEEE transactions on systems, man, andcybernetics-part A: systems and humans
Human walking in virtual environments .Springer, 2013, pp. 3–26 (cit. on p. 10).[Whi+18] Eric Whitmire, Hrvoje Benko, Christian Holz, Eyal Ofek, and Mike Sinclair.„Haptic revolver: Touch, shear, texture, and shape rendering on a reconfig-urable virtual reality controller“. In:
Proceedings of the 2018 CHI Conferenceon Human Factors in Computing Systems . ACM. 2018, p. 86 (cit. on p. 43).[WHR99] Zachary Justin Wartell, Larry F Hodges, and William Ribarsky.
The analyticdistortion induced by false-eye separation in head-tracked stereoscopic displays .Tech. rep. Georgia Institute of Technology, 1999 (cit. on p. 11).[Wil+16] Preston Tunnell Wilson, William Kalescky, Ansel MacLaughlin, and BetsyWilliams. „VR locomotion: walking> walking in place> arm swinging“. In:
Proceedings of the 15th ACM SIGGRAPH Conference on Virtual-Reality Contin-uum and Its Applications in Industry-Volume 1 . ACM. 2016, pp. 243–249.[WJS05] Bob G. Witmer, Christian J. Jerome, and Michael J. Singer. „The FactorStructure of the Presence Questionnaire“. In:
Presence: Teleoperators andVirtual Environments
Presence . IEEE. 2010, pp. 51–58.[Xia05] Dave Vronay Xiang Cao Eyal Ofek. „Evaluation of Alternative PresentationControl Techniques“. In:
ACM SIG CHI 2005 . ACM, Jan. 2005 (cit. on p. 21).[Yan+04] Yasuyuki Yanagida, Shinjiro Kawato, Haruo Noma, Akira Tomono, and NTesutani. „Projection based olfactory display with nose tracking“. In:
IEEEVirtual Reality 2004 . IEEE. 2004, pp. 43–50 (cit. on p. 4).[Yao+14] Richard Yao, Tom Heath, Aaron Davies, et al. „Oculus vr best practices guide“.In:
Oculus VR (2014), pp. 27–39 (cit. on p. 9).[Yar67] Alfred L Yarbus. „Eye movements during perception of complex objects“. In:
Eye movements and vision . Springer, 1967, pp. 171–211 (cit. on p. 35). Bibliography
YB07] Nick Yee and Jeremy Bailenson. „The Proteus effect: The effect of transformedself-representation on behavior“. In:
Human communication research
Games User Research: A Case Study Approach (2016),p. 145 (cit. on p. 17).[Zar+14] Alexander Zaranek, Bryan Ramoul, Hua Fei Yu, Yiyu Yao, and Robert J Teather.„Performance of modern gaming input devices in first-person shooter targetacquisition“. In:
Proceedings of the extended abstracts of the 32nd annual ACMconference on Human factors in computing systems . ACM. 2014, pp. 1495–1500(cit. on p. 17).[ZCZ12] Haimo Zhang, Xiang Cao, and Shengdong Zhao. „Beyond Stereo: An Explo-ration of Unconventional Binocular Presentation for Novel Visual Experience“.In:
Proceedings of the SIGCHI Conference on Human Factors in ComputingSystems . CHI ’12. Austin, Texas, USA: ACM, 2012, pp. 2523–2526 (cit. onp. 36).[ZF02] Xiaolong Zhang and George W. Furnas. „Social Interactions in MultiscaleCVEs“. In:
Proceedings of the 4th International Conference on CollaborativeVirtual Environments . CVE ’02. ACM, 2002, pp. 31–38 (cit. on pp. 10, 42).[Zha08] Li Zhaoping. „Attention capture by eye of origin singletons even withoutawareness—A hallmark of a bottom-up saliency map in the primary visualcortex“. In:
Journal of Vision
IEEE Transactionson Visualization and Computer Graphics
Proceedingsof the 2019 CHI Conference on Human Factors in Computing Systems . ACM.New York, NY, USA: ACM, 2019, p. 211 (cit. on p. 43).[Zou+17] Bochao Zou, Igor S Utochkin, Yue Liu, and Jeremy M Wolfe. „Binocularityand visual search—Revisited“. In:
Attention, Perception, & Psychophysics
Declaration

I hereby declare that I have completed this work on my own and only with the help of the references mentioned.