Back to the Future: Revisiting Mouse and Keyboard Interaction for HMD-based Immersive Analytics
Jens Grubert
Coburg University of Applied Sciences and Arts, [email protected]
Eyal Ofek
Microsoft Research, [email protected]
Michel Pahud
Microsoft Research, [email protected]
Per Ola Kristensson
University of Cambridge, [email protected]
Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the owner/author(s). Copyright held by the owner/author(s).
CHI ’20, April 25–30, 2020, Honolulu, HI, USA. ACM 978-1-4503-6819-3/20/04. https://doi.org/10.1145/3334480.XXXXXXX
Abstract
With the rise of natural user interfaces, immersive analytics applications often focus on novel interaction modalities such as mid-air gestures, gaze, or tangible interaction, utilizing input devices such as depth sensors, touch screens, and eye trackers. At the same time, traditional input devices such as the physical keyboard and mouse are used to a lesser extent. We argue that for certain work scenarios, such as conducting analytic tasks in stationary desktop settings, it can be valuable to combine the benefits of novel and established input devices and modalities to create productive immersive analytics environments.
Author Keywords
virtual reality; keyboard; mouse; immersive analytics; head-mounted displays
Introduction
The area of Immersive Analytics tries to remove barriers between data, the people who analyze it, and the tools they use to do so [18]. Researchers combine knowledge from fields such as data visualization, human-computer interaction, and mixed reality to create and study new tools and approaches for engaging with data. The rise of natural user interfaces as well as the introduction of affordable immersive head-mounted displays (HMDs) [9] led to a wide variety of interaction techniques for data and view specification and manipulation [4, 12], including touch, spatial gestures, tangible, and gaze interaction, and a number of archetypal setups such as large-screen collaborative spaces (with or without personal displays such as tablets) or immersive setups (projection- or head-mounted display-based); for an overview we refer to Büschel et al. [7]. Specifically, HMD-based systems make heavy use of spatial gestures using bare hands or controllers but are typically designed to support free-space interaction, assuming no interfering objects or humans nearby. While this allows for expressive and potentially co-located interaction, free-space interaction comes at the cost of increased fatigue [13] or inaccurate input (e.g., when using hand- or gaze-based ray casting techniques [6, 20]). While a number of techniques have been proposed to facilitate object selection in the presence of clutter (e.g., [22]), to increase spatial pointing accuracy [1, 14], or to mitigate the fatigue of spatial gestures [11], they still do not eliminate those challenges.

We argue that the combination of desktop-based input devices such as the physical keyboard and mouse with immersive head-mounted displays can benefit single users in immersive analytics tasks, similar to office-based knowledge work [10, 8] or the use of hybrid 2D/3D interaction in medicine [16].

Figure 1: Interaction with head-mounted display, keyboard and mouse.
Figure 2: Top: 3D pointing with one hand, selection confirmation via mouse press. Bottom: Transition from mid-air pointing to key press.
Keyboard and Mouse for HMD-based Immersive Analytics
The physical keyboard and mouse are optimized for symbolic and precise 2D input and have a long tradition as standard input devices in desktop environments. While not free of challenges, they have been optimized to support long hours of work [5, 27]. The keyboard was designed for rapid entry of symbolic information, and although it may not be the best mechanism ever devised for the task, its familiarity, which enables good performance without considerable learning effort, has kept it almost unchanged for many years. However, when interacting with spatial data, keyboard and mouse are perceived as falling short of providing efficient input capabilities [3], even though they are successfully used in many 3D environments (such as CAD or gaming [23]), can be modified to allow 3D interaction [26, 19], and can even outperform 3D input devices in specific tasks such as 3D object placement [2, 24].

With the advent of self-contained immersive head-mounted displays, which allow for spatial tracking of the environment and the user's hands, as well as eye tracking, there is a potential to efficiently utilize keyboard and mouse interaction in single-user, desktop-based environments (see Figure 1) for immersive analytics tasks. For example, Wang et al. [25] explored the use of an Augmented Reality extension to a desktop-based analytics environment. Specifically, they added a stereoscopic data view using a HoloLens to a traditional 2D desktop environment and interacted with keyboard and mouse across both the HoloLens and the desktop. Furthermore, the ability of immersive near-eye displays to modify the visual representations of keyboard and mouse enhances their flexibility and allows for application-specific adaptations [21].

Along this research trajectory, we see the following aspects as applicable to immersive analytics using virtual reality or video see-through-based augmented reality.
Complementary and Multi-modal Input
So far, problems in switching between spatial interaction (e.g., using controllers) and keyboard and mouse interaction have limited the applicability of desktop-based input devices for immersive analytics. Even in stationary, desktop-based scenarios it can be challenging to switch from motion-tracked controllers to keyboard and mouse devices. However, given the possibility to spatially track the user's hands as well as the keyboard and mouse through model-based tracking [15, 17], applicable to today's HMDs with camera-based inside-out tracking, we see the potential to seamlessly switch between mid-air interaction and mouse or keyboard input, see Figure 2. This could open up efficient switching between tasks (e.g., selecting 3D data around the user through spatial gestures and changing data properties through symbolic input on the keyboard) or subsequent fine-grained selection on a 2D subspace of the data using the mouse. Further, the input devices can be combined for multi-modal interaction. For example, one hand could be used for (uncertain) mid-air data selection, while the other hand could be used for certain action confirmation, e.g., through a key press on the physical keyboard, or alternatively for moving the data views around the user instead of having the user navigate through the virtual scene.
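As a minimal sketch of the multi-modal pattern above, uncertain mid-air pointing committed only by a certain key or mouse press, the following assumes hypothetical candidate and event representations rather than any real HMD SDK:

```python
# Sketch: uncertain mid-air selection + certain physical confirmation.
# `Candidate`, `score`, and `select_with_confirmation` are illustrative
# names, not part of any actual toolkit.
from dataclasses import dataclass

@dataclass
class Candidate:
    node_id: str
    score: float  # pointing confidence in [0, 1], e.g. from ray casting

def select_with_confirmation(candidates, confirm_pressed):
    """Keep the best mid-air candidate provisional until a discrete,
    reliable event on a physical device (mouse button or key press)
    commits the selection."""
    if not candidates:
        return None
    best = max(candidates, key=lambda c: c.score)
    # Mid-air pointing alone stays tentative; only the certain
    # confirmation event turns the highlight into a selection.
    return best.node_id if confirm_pressed else None
```

The design point is simply that the noisy, continuous channel proposes while the certain, discrete channel disposes, which is what makes the keyboard and mouse complementary to spatial input here.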
Figure 3: Color scale mapped to keyboard keys. Color selection could be interpolated by pressing two buttons at once.
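The two-key interpolation idea from Figure 3 could be realized along these lines; this is a sketch only, and the per-key RGB scale and the chord-averaging rule are illustrative assumptions:

```python
# Sketch: a color scale mapped onto a row of keys; holding two keys at
# once interpolates (averages) their colors. The scale values are
# placeholders, not from the paper.
def key_color(pressed, scale):
    """pressed: indices of currently held keys on the mapped row;
    scale: list of (r, g, b) tuples, one per key.
    Returns the (averaged) RGB color, or None if no key is held."""
    colors = [scale[i] for i in pressed]
    if not colors:
        return None
    n = len(colors)
    # A two-key chord yields the midpoint between the two key colors.
    return tuple(sum(c[k] for c in colors) / n for k in range(3))
```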
Augmenting Peripherals
Virtual data entities can also be augmented on or around the keyboard and mouse to allow for direct interaction with those virtual data items [21]. For example, in a node-link diagram, individual nodes could be associated with individual keys to allow quick selection (i.e., one key is mapped to one data entity), with multiple keys, e.g., when only a few nodes are present, or a single key could represent multiple nodes (e.g., in a dense node-link diagram with many nodes). Similarly, user interface elements for manipulating object properties, such as sliders, could be mapped to multiple keys on the keyboard, to the mouse wheel, or to the area around the mouse. Also, different areas on a physical mouse with touch-sensitive surfaces could have different semantics. Again, the advantage of mapping these graphical elements to the physical input devices lies in the increased certainty of the input (e.g., a key press, or moving the mouse over a physical surface) in contrast to uncertain mid-air or gaze-based input. In addition, a spatially tracked mouse could be utilized to enable constrained 3D object manipulations such as rotations or scaling.
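The one-to-one and many-to-one node-to-key mappings described above could be sketched as follows; node and key identifiers are placeholders, and the round-robin distribution is one possible assignment policy among many:

```python
# Sketch: distributing node-link diagram nodes over keyboard keys.
# With few nodes this degenerates to one key per node; with many
# nodes, each key represents a bucket of nodes.
def map_nodes_to_keys(nodes, keys):
    """Return {key: [nodes]} spreading nodes round-robin over keys,
    as evenly as possible."""
    mapping = {k: [] for k in keys}
    for i, node in enumerate(nodes):
        mapping[keys[i % len(keys)]].append(node)
    return mapping
```

A real system would likely assign keys by spatial proximity between a node's projected position and the key's location on the tracked keyboard, but the bucketing principle is the same.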
Conclusion and Future Work
Through this position paper, we aim to increase awareness of the potential that traditional desktop-based input devices such as the physical keyboard and mouse can bring to immersive analytics tasks. This potential lies in the combination of the certain but (in terms of degrees of freedom) spatially limited input of those devices with expressive but uncertain and fatiguing spatial input, as well as in the ability to virtually augment keyboard and mouse for enhanced interaction in immersive analytics tasks. In future work, we aim to investigate specific immersive analytics tasks and to study the opportunities for multi-modal interaction between spatial and keyboard-and-mouse-based interaction in more detail. Finally, we will also explore the opportunities of integrating stationary touch screens (e.g., integrated in laptops) for immersive analytics tasks.
REFERENCES
[1] Ferran Argelaguet and Carlos Andujar. 2013. A survey of 3D object selection techniques for virtual environments. Computers & Graphics 37, 3 (2013), 121–136.
[2] François Bérard, Jessica Ip, Mitchel Benovoy, Dalia El-Shimy, Jeffrey R Blum, and Jeremy R Cooperstock. 2009. Did "Minority Report" get it wrong? Superiority of the mouse over 3D input devices in a 3D placement task. In IFIP Conference on Human-Computer Interaction. Springer, 400–414.
[3] Lonni Besançon, Paul Issartel, Mehdi Ammi, and Tobias Isenberg. 2017. Mouse, tactile, and tangible input for 3D manipulation. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems. 4727–4740.
[4] Doug Bowman, Ernst Kruijff, Joseph J LaViola Jr, and Ivan P Poupyrev. 2004. 3D User Interfaces: Theory and Practice, CourseSmart eTextbook. Addison-Wesley.
[5] Jay L Brand. 2008. Office ergonomics: A review of pertinent research and recent developments. Reviews of Human Factors and Ergonomics 4, 1 (2008), 245–282.
[6] Michelle A Brown, Wolfgang Stuerzlinger, and EJ Mendonça Filho. 2014. The performance of un-instrumented in-air pointing. In Proceedings of Graphics Interface 2014. Citeseer, 59–66.
[7] Wolfgang Büschel, Jian Chen, Raimund Dachselt, Steven Drucker, Tim Dwyer, Carsten Görg, Tobias Isenberg, Andreas Kerren, Chris North, and Wolfgang Stuerzlinger. 2018. Interaction for immersive analytics. In Immersive Analytics. Springer, 95–138.
[8] Citigroup. 2016 (accessed March 31, 2020). Citi HoloLens Holographic Workstation.
[9] Grégoire Cliquet, Matthieu Perreira, Fabien Picarougne, Yannick Prié, and Toinon Vigier. 2017. Towards HMD-based immersive analytics.
[10] Jens Grubert, Eyal Ofek, Michel Pahud, Per Ola Kristensson, Frank Steinicke, and Christian Sandor. 2018. The office of the future: Virtual, portable, and global. IEEE Computer Graphics and Applications 38, 6 (2018), 125–133.
[11] Jeffrey T Hansberger, Chao Peng, Shannon L Mathis, Vaidyanath Areyur Shanthakumar, Sarah C Meacham, Lizhou Cao, and Victoria R Blakely. 2017. Dispelling the gorilla arm syndrome: the viability of prolonged gesture interactions. In International Conference on Virtual, Augmented and Mixed Reality. Springer, 505–520.
[12] Jeffrey Heer and Ben Shneiderman. 2012. Interactive dynamics for visual analysis. Queue 10, 2 (2012), 30–55.
[13] Juan David Hincapié-Ramos, Xiang Guo, Paymahn Moghadasian, and Pourang Irani. 2014. Consumed Endurance: A Metric to Quantify Arm Fatigue of Mid-Air Interactions. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI '14). Association for Computing Machinery, New York, NY, USA, 1063–1072. DOI: http://dx.doi.org/10.1145/2556288.2557130
[14] Mikko Kytö, Barrett Ens, Thammathip Piumsomboon, Gun A Lee, and Mark Billinghurst. 2018. Pinpointing: Precise head- and eye-based target selection for augmented reality. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems. 1–14.
[15] Vincent Lepetit, Pascal Fua, and others. 2005. Monocular model-based 3D tracking of rigid objects: A survey. Foundations and Trends® in Computer Graphics and Vision 1, 1 (2005), 1–89.
[16] Veera Bhadra Harish Mandalika, Alexander I Chernoglazov, Mark Billinghurst, Christoph Bartneck, Michael A Hurrell, Niels De Ruiter, Anthony PH Butler, and Philip H Butler. 2018. A hybrid 2D/3D user interface for radiological diagnosis. Journal of Digital Imaging 31, 1 (2018), 56–73.
[17] Eric Marchand, Hideaki Uchiyama, and Fabien Spindler. 2015. Pose estimation for augmented reality: a hands-on survey. IEEE Transactions on Visualization and Computer Graphics 22, 12 (2015), 2633–2651.
[18] Kim Marriott, Falk Schreiber, Tim Dwyer, Karsten Klein, Nathalie Henry Riche, Takayuki Itoh, Wolfgang Stuerzlinger, and Bruce H Thomas. 2018. Immersive Analytics. Vol. 11190. Springer.
[19] Gary Perelman, Marcos Serrano, Mathieu Raynal, Celia Picard, Mustapha Derras, and Emmanuel Dubois. 2015. The roly-poly mouse: Designing a rolling input device unifying 2D and 3D interaction. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems. 327–336.
[20] Yuan Yuan Qian and Robert J Teather. 2017. The eyes don't have it: an empirical comparison of head-based and eye-based selection in virtual reality. In Proceedings of the 5th Symposium on Spatial User Interaction. 91–98.
[21] Daniel Schneider, Alexander Otte, Travis Gesslein, Philipp Gagel, Bastian Kuth, Mohamad Shahm Damlakhi, Oliver Dietz, Eyal Ofek, Michel Pahud, Per Ola Kristensson, Jörg Müller, and Jens Grubert. 2019. ReconViguRation: Reconfiguring physical keyboards in virtual reality. IEEE Transactions on Visualization and Computer Graphics 25, 11 (2019), 3190–3201.
[22] Ludwig Sidenmark, Christopher Clarke, Xuesong Zhang, Jenny Phu, and Hans Gellersen. 2020. Outline Pursuits: Gaze-assisted Selection of Occluded Objects in Virtual Reality. (2020).
[23] Wolfgang Stuerzlinger and Chadwick A Wingrave. 2011. The value of constraints for 3D user interfaces. In Virtual Realities. Springer, 203–223.
[24] Junwei Sun, Wolfgang Stuerzlinger, and Bernhard E Riecke. 2018. Comparing input methods and cursors for 3D positioning with head-mounted displays. In Proceedings of the 15th ACM Symposium on Applied Perception. 1–8.
[25] Xiyao Wang, Lonni Besançon, David Rousseau, Mickael Sereno, Mehdi Ammi, and Tobias Isenberg. 2020. Towards an Understanding of Augmented Reality Extensions for Existing 3D Data Analysis Tools. In ACM Conference on Human Factors in Computing Systems.
[26] Colin Ware and Kathy Lowther. 1997. Selection using a one-eyed cursor in a fish tank VR environment. ACM Transactions on Computer-Human Interaction (TOCHI) 4, 4 (1997), 309–322.
[27] EHC Woo, Peter White, and CWK Lai. 2016. Ergonomics standards and guidelines for computer workstation design and the impact on users' health – a review.