Journal of Computational Design and Engineering, 2020, 7(6), 700–721
doi: 10.1093/jcde/qwaa052
RESEARCH ARTICLE
User interface for in-vehicle systems with on-wheel finger spreading gestures and head-up displays
Sang Hun Lee* and Se-One Yoon
Graduate School of Automotive Engineering, Kookmin University, Seoul 02707, Republic of Korea
*Corresponding author. E-mail: [email protected] http://orcid.org/0000-0001-8888-2201
Received: 24 December 2019; Revised: 30 May 2020; Accepted: 30 May 2020
© The Author(s) 2020. Published by Oxford University Press on behalf of the Society for Computational Design and Engineering. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted reuse, distribution, and reproduction in any medium, provided the original work is properly cited.
Abstract
Interacting with an in-vehicle system through a central console is known to induce visual and biomechanical distractions, thereby delaying the danger recognition and response times of the driver and significantly increasing the risk of an accident. To address this problem, various hand gestures have been developed. Although such gestures can reduce visual demand, they are limited in number, lack passive feedback, can be vague and imprecise, difficult to understand and remember, and culture-bound. To overcome these limitations, we developed a novel on-wheel finger spreading gestural interface combined with a head-up display (HUD) allowing the user to choose a menu displayed in the HUD with a gesture. This interface displays the audio and air conditioning functions of the central console on a HUD and enables their control using a specific number of fingers while keeping both hands on the steering wheel. We compared the effectiveness of the newly proposed hybrid interface against a traditional tactile interface for a central console using objective measurements and subjective evaluations regarding both vehicle and driver behaviour. A total of 32 subjects were recruited to conduct experiments on a driving simulator equipped with the proposed interface under various scenarios. The results showed that the proposed interface was approximately 20% faster in emergency response than the traditional interface, whereas its performance in maintaining vehicle speed and lane was not significantly different from that of the traditional one.

Keywords: user interface; in-vehicle system; hand gesture; head-up display; human–vehicle interaction; driver distraction
1. Introduction
Driving is a complex task, usually requiring the complete attention resources of the driver, and thus performing other activities simultaneously may lead to a major decline in driving performance. The National Highway Traffic Safety Administration (NHTSA) classifies driver distractions into four types: visual, biomechanical, auditory, and cognitive (Ranney, Garrott, & Goodman, 2001). Visual distractions occur when a driver is looking elsewhere and therefore not paying full attention to the road. Biomechanical distractions occur when the driver uses their body for tasks other than driving, such as drinking, smoking, or interacting with in-vehicle systems. Auditory distractions occur when the driver uses devices such as smartphones, listens to the radio, or talks to a passenger. Finally, cognitive distractions occur when the driver is thinking about not only the road but also other things that might affect their driving (e.g. children running into the road chasing after a ball).

According to the NHTSA traffic accident database, 25–30% of all traffic accidents and 78% of collision accidents that occur in the United States are caused by driver distraction (Stutts, Reinfurt, Staplin, & Rodgman, 2001; Neale, Dingus, Klauer, Sudweeks, & Goodman, 2005). Actual vehicle driver data show that drivers taking their eyes off the road for more than 2 s increase the probability of an accident, whereas performing complex tasks triples the risk of a collision (Llaneras, 2000). Approximately 50% of accidents from driver distractions are caused by the use of smartphones and in-vehicle infotainment systems, such as navigational systems (Llaneras, 2000; Stutts et al., 2001). Tasks that require the direct use of the hands while driving, such as
touching a screen or pushing a button on the central console, are major factors in traffic accidents (Klauer, Dingus, Neale, Sudweeks, & Ramsey, 2006). An investigation conducted by the American Automobile Association in 2006, which classified distraction-inducing activities into various types and analysed them according to their weighted contributions to driver distraction, showed that tasks such as turning on or controlling an audio device accounted for the highest proportion of distracting activities (72.5%) (Stutts et al., 2003). Thus, statistical analyses of the causes of traffic accidents indicate a need to consider human factors when designing in-vehicle systems in order to minimize their potential for distraction and the possibility of contributing to an accident.

New types of in-vehicle control systems that support natural interactions between humans and their devices are being developed (Hong & Woo, 2008). In-vehicle system controls can be classified into four types: tactile, touchscreen, speech, and gesture control (Bach, Jaeger, Skov, & Thomassen, 2008). Traditional tactile controls include push buttons and rotary switches, whereas more recent forms of tangible control use touchscreens, which may cause significant visual and biomechanical distractions if they are located away from the driver's line of sight and hand position, for example, on the central console. Research has shown that touchscreen controls cause more severe visual and cognitive distractions than traditional tactile controls (Tsimhoni & Green, 2001; Noy, Lemoine, Klachan, & Burns, 2004; Bellotti, De Gloria, Montanari, Dosio, & Morreale, 2005). Speech control is the most natural form of control and causes little visual and biomechanical distraction because it facilitates hands-free and eye-attention-free interaction; however, it can contribute to auditory distractions (Gellatly, 1997; Barón & Green, 2006). Gesture control is considered an alternative to speech control and has the potential to overcome some of the weaknesses of the latter. As with speech control, gesture control is natural and can reduce visual distractions; however, it may cause biomechanical distractions, and the types of systems using such control are currently limited.

To overcome the problems of gesture control, it is necessary to develop gestures that allow the driver to keep their hands on the steering wheel and their eyes on the road (Werner, 2014). The Geremin system sensed 17 distinct index finger movements away from the wheel as gestures (Endres, Schwartz, & Müller, 2011). The WheelSense system used an ergonomic analysis of hand positions on the steering wheel to evaluate four different wheel grip gestures: two forms of rotation, dragging, and squeezing (Angelini et al., 2013).
Several research groups (González, Wobbrock, Chau, Faulring, & Myers, 2007; Döring et al., 2011; Ulrich et al., 2013) have used different variations of thumb-based gestures on a touch-sensitive surface on the steering wheel with small sets of gestures (4 to 19) to allow gesture interactions while holding the steering wheel with both hands. All on-wheel gesture interactions rely on a small number of distinguishable thumb or finger gestures as their main interaction technique. Consequently, they impose several limitations on the user relating to their limited number, vague and imprecise meaning, difficulty in remembering, and the lack of passive feedback, i.e. feedback on the current status of the systems and variable values.

Combining gestures with different types of feedback may be a solution to overcome those limitations and could create a powerful and diverse in-vehicle interaction platform. When touchscreens are used as a gesture input, they frequently demand visual feedback, although eyes-free gestures on touchscreens can be facilitated by also providing auditory feedback (Bach et al., 2008), as explored by Angelini et al. (2013). Another common approach for 'eyes on the road' gestures is providing visual feedback with a head-up display (HUD) (Koyama et al., 2014). Numerous studies have illustrated the benefits of HUDs over head-down displays (HDDs) for the presentation of information related to the operation of the in-vehicle system as well as the vehicle itself (Sojourner & Antin, 1990; Kiefer, 1998; Liu & Wen, 2004; Prinzel & Risser, 2004; Weinberg, Harsham, & Medenica, 2011; Lauber, Follmann, & Butz, 2014; Skrypchuk, Langdon, Sawyer, & Clarkson, 2020). They have identified shorter display–road transition and eye accommodation times (Sojourner & Antin, 1990; Kiefer, 1998) and found that the HUD-based interface had a low impact on mental load and scored highest in user satisfaction (Weinberg et al., 2011) in comparison to HDDs. However, there are also problems with HUDs, as summarized in Prinzel and Risser (2004), mainly known as cognitive capture, attention capture, or perceptual tunnelling. To improve efficiency and safety, a HUD system must provide drivers with not only large amounts of information from many categories (e.g. route guidance/navigation, traffic signs, cargo/road/vehicle conditions) but also the best way to display this information; important considerations include having a user-friendly system, since a driver's capacity to process this information is a key factor in its acceptance and use (Liu & Wen, 2004).

In this work, we developed a new form of gesture control in which a specific number of fingers of the left and/or right hand are opened and closed while the hands remain on the steering wheel – termed on-wheel finger spreading gestures – and combined this with a visual interface displaying the device control menu for audio and air conditioning (A/C) on a HUD. We performed driver-in-the-loop experiments under various driving scenarios to evaluate the effectiveness of our hybrid interface against a traditional tactile interface. A preliminary study was conducted, and its results were presented (Lee, Yoon, & Shin, 2015) prior to this work. In response to comments from reviewers and attendees at the presentation, we redesigned and conducted the experiment with more participants and expanded its contents significantly. The contributions of the study are as follows:
- We propose a new type of gesture called on-wheel finger spreading gestures, which enable the driver to keep their hands on the steering wheel and are easy to perform regardless of the rotation position of the steering wheel.
- We propose a new user interface for in-vehicle devices using on-wheel finger gestures and a HUD for naturalistic input and information display, which can reduce distractions to the driver and enhance response times to avoid dangers in unexpected or hazardous road conditions.
- We verify the effectiveness of the new interface by conducting human-in-the-loop experiments using a driving simulator and comparing the results with those of a traditional interface.
- The results show that the proposed interface was around 20% faster in emergency response than the traditional tactile interface, whereas the performance of the new interface in maintaining vehicle speed and lane did not differ significantly from that of the traditional interface.

This paper is organized as follows. Section 2 surveys related work on gestural or multimodal interfaces for in-vehicle systems. Section 3 introduces our newly proposed interface for audio and A/C systems, which was developed based around on-wheel gesture controls and a HUD. Section 4 describes
human-in-the-loop experiments conducted using a driving simulator. Section 5 presents a quantitative comparison and evaluation of the effectiveness between the traditional and proposed interfaces, together with a statistical analysis of the experimental results. Section 6 discusses the main findings from the experiment and the limitations of this study. Section 7 summarizes the results and proposes future work.
2. Literature Review
González et al. (2007) stressed the importance of designing gestural interfaces that help the user to maintain 'eyes on the road and hands on the wheel'. They embedded a small touchpad called a StampPad in a steering wheel and evaluated seven methods for selecting from a list of over 3000 street names. Selection speed was measured while stationary and while driving a simulator. The results showed that the EdgeWrite gestural text entry method was approximately 20 to 50% faster than selection-based text entry or direct list-selection methods. They also showed that a faster driving speed generally resulted in slower selection speed. However, with EdgeWrite, participants were able to maintain their speed and avoid incidents while selecting and driving at the same time. This work was a first attempt to explore and evaluate emerging on-wheel hand gestures adopted in automotive settings. Nonetheless, since this system requires two touchpads to be mounted at the 2 o'clock and 10 o'clock positions, it is difficult to use this interface when the steering wheel is rotated. In contrast, the gestures proposed in our system can be used independently of the rotational position of the wheel.

Bach et al. (2008) presented an approach towards in-vehicle gestural interaction, comparing tactile, touchscreen, and gestural interaction to control a radio. For gestural inputs, a touchscreen mounted on the vertical centre stack was used. They focused on the effects on driving and evaluated three interaction techniques using 16 subjects under two different settings. Their results indicated that, although gestural interaction is slower than touch or haptic interaction, it can reduce distractive eye glances while interacting with the radio. In their work, the gesture interface uses a touchscreen as a drawing canvas, thus the right hand is taken off the steering wheel. In contrast, our gestures are made while both hands remain on the wheel, which can reduce both biomechanical and visual distractions.

Döring et al. (2011) developed a user-defined set of steering wheel gestures and a working prototype based on a multi-touch steering wheel for gesture identification and compared their application against conventional user interaction with an infotainment system, considering driver distraction. Their results showed that using gestures reduces the visual demand for interaction tasks; however, gestures introduce a problem similar to buttons: scalability. With gestures that do not need visual attention, the gesture set rapidly becomes complicated and harder to remember, and touch interaction tied to the content displayed on the screen reduces the benefit of reduced visual attention. This study motivated us to investigate a user interface combining on-wheel gestures with a graphic interface using a HUD. Since Döring et al. used a system with a multi-touch canvas on the steering wheel for touch input and graphical output, the driver's gaze can be taken off the road. To address this problem, we introduce a HUD for graphical output and on-wheel finger gestures for non-contact input.

Pfleging, Schneegass, and Schmidt (2012) proposed a multimodal interaction system combining speech and gesture interfaces, in which voice commands are used to select visible objects or functions while simple touch gestures are used to control these functions. With such an approach, it is simpler to recall voice commands as users see what they need to say.
By using a simple touch gesture, the interaction style lowers visual demand and simultaneously provides immediate feedback and an easy means for undoing unwanted actions. A set of user-elicited gestures and common voice commands were determined in a user-centred system design process. In an experiment with 16 participants, Pfleging et al. explored the impact of this form of multimodal interaction on driving performance against a baseline using physical buttons. Their results indicate that the use of speech and gestures is slower than using buttons but results in similar driving performance. Users commented in a questionnaire that the visual demand was lower when using speech and gestures. The overall distraction of this multimodal interaction is comparable to the current interaction approach but offers greater flexibility. In particular, voice controls can cut down on the number of clicks required in multiple levels of a menu structure; however, naming objects and functions could be difficult, and users may be required to learn and remember them. To avoid these issues, we do not adopt voice controls but instead use HUD and audio feedback to confirm that a selection was performed correctly.

Angelini et al. (2013) presented a novel opportunistic paradigm for in-vehicle gesture recognition allowing the use of two or more subsystems in a synergistic manner. In order to segment and recognize micro-gestures performed by the user on the steering wheel, they combined a wearable approach based on the electromyography of the user's forearm muscles with an environmental approach based on pressure sensors directly integrated into the steering wheel. Several fusion methods and gesture segmentation strategies were presented and analysed, whereupon a prototype was developed and evaluated with data from nine subjects. Their results showed that the proposed opportunistic system performs equal to or better than each stand-alone subsystem while increasing interaction possibilities. The micro-gesture interface using pressure sensors helps the user to keep their attention on the road and their hands on the steering wheel. However, attaching electromyography (EMG) sensors to the user's arms is not practical. Our current method uses the same advantageous aspects of on-wheel micro-gestures. Although we currently use data gloves for recognizing finger gestures, we plan to replace them with a fusion of pressure and vision sensors in the future.

Angelini et al. (2014) presented the results of a user elicitation study for gestures performed on the surface of the steering wheel. Forty participants elicited a total of 240 gestures. The study provided useful information about gestures that users are likely to expect in an in-vehicle gestural interface. This information could help in the design of steering wheels that detect gestures on their surfaces. Thus, technologies based on proximity, capacitive, or pressure sensors can be used to provide an interaction compliant with the 'eyes on the road, hands on the wheel' paradigm, if coupled with proper feedback such as a HUD. We adopted the results of this study to combine our gestural interface with a HUD for visual feedback.

Koyama et al. (2014) developed a multi-touch steering wheel where touch positions correspond to different operating positions in order to control the information offered by in-car tertiary applications. Thereby, drivers could operate applications at any position on the steering wheel.
One hundred and twenty infrared sensors were embedded in the steering wheel, and the system was trained using a Support Vector Machine algorithm to recognize different hand gestures (flick, click, tap, stroke, and twist). Additionally, navigation and audio applications were implemented on the proposed interface. Koyama et al. conducted a user study for the navigation application and found an average flick recognition rate of about 92%. However, to select a function, the driver had to browse the menu using a flicking gesture until reaching the desired function. This selection process may cause visual and cognitive distractions. The menu structure of an application thus needs to be designed taking into consideration the cognitive workload and visual demand.
Figure 1: Head and eye movements and hand positions during in-vehicle device control (Lee et al., 2015): (a) a traditional interface using a tactile interface on the central console and (b) a new interface using on-wheel gestures and a HUD.
Figure 2: A series of on-wheel gestures to control radio and audio devices proposed by Kang (2012).
Riener (2012) surveyed in-vehicle gestural interfaces and summarized the advantages and disadvantages of various finger- and hand-gesture recognition systems. Ha and Ko (2015) proposed a vision-based shadow-gesture recognition method for interactive projection systems, which separated the shadow area, isolated hand shadows to distinguish hand gestures, and tracked the fingertips using an optical flow algorithm. Rempel, Camilleri, and Lee (2004) studied discomfort and fatigue resulting from hand gestures associated with characters and words used by professional sign language interpreters. Wachs, Kölsch, Stern, and Edan (2011) extensively evaluated the design of hand gestures and recommended four guidelines for future hand gesture interfaces to increase the likelihood of their widespread commercial or social acceptance: validation, user independence, usability criteria, and qualitative/quantitative assessment.

Weinberg et al. (2011) evaluated the usability of a HUD for selection from choice lists in a car. The experiments on a driving simulator showed that the HUD had a low impact on mental load and scored highest in user satisfaction among three output system variants for in-vehicle systems: an HDD, a HUD, and an auditory display.
Figure 3: Spreading the fingers to indicate different numbers using on-wheel finger spreading gestures: indicating (a) one by spreading the index finger; (b) two by spreading the index and middle fingers; (c) three by spreading the index, middle, and ring fingers; (d) four by spreading four fingers excluding the thumb; (e) one by spreading the little finger; and (f) two by spreading the ring and little fingers.
Figure 4: Essential Reality P5 Gaming Glove: (a) original right-hand glove and (b) customized right- and left-hand gloves.
Lauber et al. (2014) presented the 'What You See Is What You Touch' (WYSIWYT) technique for touchscreen interaction that no longer requires any direct visual attention on the touchscreen itself. Instead, its content, as well as a representation of the user's finger, is displayed in the HUD. This creates a shorter distance between the display location and the road scene, which in turn allows beneficial gaze behaviour. Instead of having to fully avert the eyes from the road, the driver can switch their focus back and forth during interaction. They combined this approach with pointing gestures and introduced several variations of the WYSIWYT technique, some of which allow users to interact with the touchscreen without actually touching it.

Shim and Lee (2016) proposed an in-vehicle spatial gesture control combined with a HUD to reduce visual distractions for drivers. They selected the controls used most frequently for the audio and A/C devices in the central console and implemented these controls using HUD menus and a hand-motion recognition system. Recently, we proposed a new driver interface that replaces spatial gesture control with on-wheel gesture control, in which the driver spreads and closes the fingers while the hands remain on the steering wheel (Lee et al., 2015). We compared our proposed system with a traditional interface using tactile control and an HDD, using quantitative and objective measurements as well as qualitative and subjective evaluations. However, our study had limitations not only in the scenarios and participants in the experiment but also in objective measurements. Therefore, in response to comments from reviewers and attendees at its presentation, we redesigned the system and conducted the experiment with more participants and expanded its contents significantly. We also considered the large volume of research on the application of virtual or augmented reality to a user interface, which may be applied to user interfaces in the future (Takahashi et al., 2018; Fukuda, Yokoi, Yabuki, & Motamedi, 2019; Son, Jang, & Choi, 2019; Sun, Hu, & Xu, 2019).
Figure 5: Audio and A/C controls of a Hyundai Sonata on the central console of the driving simulator.
3. Design of User Interface Combined with On-wheel Finger Gestures and a HUD

On-wheel finger spreading gestures

Conventional in-vehicle systems are operated via tactile interactions with buttons on the lower part of the unit or by turning the volume controller knob. Thus, the user generally experiences tactile feedback when pressing buttons or turning knobs. Visual distraction occurs when the driver's vision shifts to the central console, which is located to the lower right of the steering wheel, in order to execute tactile control of the audio and A/C systems, as shown in Fig. 1a. In addition, biomechanical distraction occurs when the driver's hand moves away from the steering wheel. To address these problems, we developed a new form of gesture control in which the fingers are allowed to move while the hands remain on the steering wheel, as shown in Fig. 1b, together with a new interface that displays the device control menu on a HUD to minimize visual distractions.

The idea of our finger gesture control was inspired by Kang (2012). In his thesis, through a user survey, he extracted intuitive and useful on-wheel gestures and one-hand gestures near the area of the central console for controlling in-vehicle devices such as the radio, audio, A/C, and heater. These candidate gestures were then evaluated by design experts to select examples for inclusion in the final set-up. Next, through one-to-one interviews with an expert group, the gestures were mapped to specific operations of in-vehicle devices. Following an interview with a group of experts, the index raise and thumb twist gestures were selected as the simplest and most intuitive for controlling in-vehicle devices. The left-hand index raise and left-hand thumb twist gestures were chosen to increase and decrease channels in the radio and audio, and the temperatures in the heater and air conditioner, as shown in the gesture scenario presented in Fig. 2. The right-hand index raise and right-hand thumb twist gestures were chosen to increase or decrease the volume of the radio and audio, and the airflow of the heater and air conditioner. The hand spreading gesture was selected to turn off devices since this gesture is clearly distinct from the others. The thumb movement, however, is not well recognized by vision-based gesture recognition systems. To overcome this, it is necessary to mount small touchpads on the steering wheel, typically at the 2 o'clock and 10 o'clock positions, as proposed by González et al. (2007). However, it is difficult to use touchpads when the steering wheel is rotated. Moreover, these proposed gestures have not been evaluated and verified through human-in-the-loop experiments using a driving simulator.

To improve Kang's on-wheel gestures, we adopted only the index raise gestures and combined these gestures with a HUD. In this paper, we refer to the spreading and closing of the fingers while the hands remain on the steering wheel as on-wheel finger spreading gestures, or simply on-wheel finger gestures.
Figure 6: Results of the questionnaire survey used to determine the most frequently used switches on the central console.
Figure 7: HUD graphical user interface: (a) initial menu; (b) radio; (c) MP3; (d) A/C; and (e) heater control.

As shown in Fig. 3, such on-wheel finger gestures can be applied to operate a specific switch according to the number of spread fingers while the hands remain on the steering wheel. The reason for choosing the number of fingers as the output of the gestures is that there are up to 25 (= 5 × 5) choices from the combinations of spread fingers of the right and left hands.
When a multimodal interface is designed by combining gestures and a display, having many such choices gives freedom to design the menus of the graphical user interface (GUI). In this work, gestures were used to select menus on the HUD, but we did not distinguish which fingers were spread because some finger gestures may be physically stressful or preferred depending on individual users. On-wheel finger gestures facilitate immediate responses to danger because both hands remain on the steering wheel. Moreover, the finger that is spread does not need to be a specific finger, thereby avoiding differences in finger spreading that may occur from cultural differences, personal preference, or physical stress (Rempel et al., 2004). A human–machine integrated design and analysis framework based on motion synthesis and biomechanical analysis can be applied for quantitative discomfort evaluation of finger and hand gestures (Choi & Lee, 2015; Lee, Jung, Lee, & Lee, 2017).

In general, gestures are captured as images by a camera or as signals by motion sensors, which are then recognized using pattern-recognition processes. The aim of this study was not gesture recognition itself; thus, a data glove, rather than computer vision technology, was used to capture finger movements. On-wheel gestures were captured using an Essential Reality P5 Gaming Glove. As shown in Fig. 4, the glove was disassembled and its components were attached to a cotton glove, which allowed the glove to be worn easily and increased the rate of recognition of finger bending. Furthermore, since only a right-hand glove was available, the components from a disassembled right-hand glove were used to produce a left-hand glove. This glove obtains finger-bend information, as well as coordinate information along the X, Y, and Z axes when used in conjunction with an infrared tracking unit. The finger sensors provide five independent finger measurements at a 60 Hz refresh rate and 0.5° resolution. The range of finger bending is 0 (completely closed) to 63 (completely opened). In the system, the threshold was set to 30, meaning that the system recognizes a finger as straightened if its bend value is greater than the threshold. We tested our system using a set of 10 selected natural gestures, performed multiple times by 20 different persons. Our system is able to recognize finger spreads correctly in more than 99% of instances with a false positive rate lower than 0.5%.
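As a minimal illustration of the thresholding step described above, the following sketch counts how many fingers of one hand are spread from raw glove bend values. The function and constant names are our own illustrative assumptions rather than the authors' implementation; only the 0–63 bend range and the threshold of 30 are taken from the text.

```python
# Minimal sketch of the finger-spread detection described above (assumed names).
# Bend values from the P5 glove range from 0 (fully closed) to 63 (fully open);
# a finger is treated as "spread" when its bend value exceeds the threshold of 30.

BEND_MIN, BEND_MAX = 0, 63
SPREAD_THRESHOLD = 30


def count_spread_fingers(bend_values):
    """Return how many of the five finger-bend readings exceed the threshold."""
    if len(bend_values) != 5:
        raise ValueError("expected one bend value per finger (5 values)")
    return sum(1 for v in bend_values if v > SPREAD_THRESHOLD)


# Example: index and middle fingers open on the right hand -> gesture value 2.
right_hand = [10, 55, 48, 12, 8]   # thumb, index, middle, ring, little
left_hand = [5, 4, 9, 7, 6]        # all fingers closed
print(count_spread_fingers(left_hand), count_spread_fingers(right_hand))  # 0 2
```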
The central console of the Hyundai EF Sonata used in this study comprised an A/C control at the top and an audio control at the bottom, as shown in Fig. 5. These controls use push buttons and rotary switches. The central console was located at the centre of the cockpit, to the lower right of the steering wheel, and contains various switches for the audio and A/C systems. Twenty-four participants were asked to identify the switches that they use most frequently, indicating that the ON/OFF switches for the A/C (cooler and heater) and audio systems (radio and MP3 player), fan, volume, and thermostat were used most frequently, as shown in Fig. 6. Therefore, in the present study, we applied on-wheel gesture controls to these switches as hotkeys, and all switches on the central console can be used in parallel.
Nevertheless, all switches on the central console can be accommodated in the graphic interface menu, although this increases the depth of the menu, which may cause cognitive distractions.

When creating user interfaces, most designers follow interface design principles that represent high-level concepts used to guide software design. Several fundamental principles, such as '10 Usability Heuristics for UI Design' (Nielsen, 1995), 'The Eight Golden Rules of Interface Design' (Shneiderman, 1998), and 'First Principles of Interaction Design' (Tognazzini, 2014), are commonly used. Among these UI design principles, we selected and applied the following: (i) Reduce cognitive load – our multimodal interface promotes recognition in the UI by making information and functionality visible; in addition, we added pips above the icons to reduce the cognitive effort required to recall the finger count. (ii) Make the UI consistent – our UI requires consistent sequences of actions in similar situations; the heater and cooler share the same UI, as do the radio and MP3 player. (iii) Offer informative feedback – the UI offers visual feedback together with verbal and sound feedback.

In this study, both hands were used, and the left and right hands were distinguished during on-wheel gesturing. Thus, a graphical interface was developed and displayed on the windshield, as shown in Figs 7 and 8, based on experience in GUI designs of gauge clusters (Eom & Lee, 2015; Lee & Ahn, 2015). The initial menu shows the arrangement of the ON/OFF switches for the radio, MP3, A/C, and heater, as shown in Fig. 7a. The switches that turn on these devices are shown on the right, and the switches that turn them off are shown on the left. The pips above each switch represent the number of fingers that need to be spread; red indicates the left hand, and blue indicates the right hand. After selecting a device from the initial menu, the user is directed to the submenu for controlling the selected device. Figure 7c shows the radio control menu. The selection menus for radio channels 1, 2, and 3, and for selecting one channel up and one channel down, are arranged in order from the centre to the left, whereas the menus for selecting one level of volume up or down, returning to the previous menu, and turning off the menu are arranged in order from the centre to the right. The MP3 control menu is shown in Fig. 7d. The controls are arranged from the centre to the left, i.e. random play, mute, and intro play (to browse and preview each file for 10 s), which can be toggle-operated, and the right-side menu is the same as that for the radio. The menus for the A/C and heater are shown in Fig. 7e and f, respectively. The four menus on the left indicate the air outlet directions, whereas the menus on the right indicate one level up and one level down for the fan, going up one level in the menu, and turning the control off.
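To make the menu layout above concrete, the sketch below encodes each HUD menu as a mapping from a (hand, number of spread fingers) pair to an action label; the finger count in each key corresponds to the number of pips drawn above an icon (red for the left hand, blue for the right). The specific assignments and names (TOP_MENU, RADIO_MENU, resolve) are illustrative placeholders, not the layout used in the actual system.

```python
# Hypothetical encoding of the HUD menus described above:
# each menu maps (hand, number of spread fingers) -> action label.

TOP_MENU = {
    ("right", 1): "radio_on", ("right", 2): "mp3_on",
    ("right", 3): "ac_on",    ("right", 4): "heater_on",    # turn-on icons (right side)
    ("left", 1): "radio_off", ("left", 2): "mp3_off",
    ("left", 3): "ac_off",    ("left", 4): "heater_off",    # turn-off icons (left side)
}

RADIO_MENU = {
    ("left", 1): "channel_1", ("left", 2): "channel_2", ("left", 3): "channel_3",
    ("right", 1): "volume_up", ("right", 2): "volume_down",
    ("right", 3): "back",      ("right", 4): "menu_off",
}


def resolve(menu, hand, spread_count):
    """Return the action selected by a gesture, or None if it maps to nothing."""
    return menu.get((hand, spread_count))


print(resolve(TOP_MENU, "right", 1))    # radio_on
print(resolve(RADIO_MENU, "right", 3))  # back
```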
We also provide hotkeys to turn the system on and off and to cancel the current input. To turn the system on and off, the user must spread both hands. To cancel the current input and return to the previous state, the user must spread all the fingers of the right hand. To go to the topmost menu, the user spreads the index fingers of both hands.

Since gesture recognition systems have an inherent accuracy limitation, it is very important to give feedback to the user in order to make the recognition process less opaque. For this reason, we upgraded the system to provide audio feedback on the completion of a gesture. When a device is selected on the top menu, the system gives verbal feedback, i.e. the name of the device is heard. When an operational function is selected on a submenu, a simple electronic sound is generated, or the sound of an audio device is heard. We chose to implement only the simplest audio feedback, leaving more sophisticated audio feedback for future work.

The overall architecture of the system is illustrated in Fig. 9. Finger opening and closing motions are captured by the bend sensors of the data gloves. The finger-bend values are transferred to the gesture recognition module, in which a finger is determined to be 'spread' if its value passes the threshold. According to the number of opened fingers on the right and left hands, a specific icon representing the device or control is selected from the current menu on the HUD. Finally, the selected control of the device is executed.

An example of controlling the radio volume using this new interface is shown in Fig. 10. The radio is selected when one finger is spread, as shown in the first image in the figure. Next, spreading two fingers on the right hand increases the volume. Four fingers are then spread on the right hand to select the OFF state, leave the submenu, and return to the initial menu screen.
Figure 9: Overall architecture of the system equipped with the proposed user interface combined with on-wheel finger spreading gestures and a HUD.
Figure 10: Example of radio control using the new interface based on on-wheel finger spreading gestures.
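To make the gesture-to-menu pipeline of Figs 9 and 10 concrete, the following minimal Python sketch threshold-tests the per-finger bend values and drives a small menu state machine. Only the behaviours explicitly described above are encoded (the 0–63 bend range with a threshold of 30, the radio example of Fig. 10, and the global hotkeys); all names and the remaining menu assignments are illustrative assumptions, not the authors' actual implementation.

    # Illustrative sketch of the recognition-and-selection pipeline.
    # A finger counts as 'spread' when its bend value (0 = closed, 63 = open)
    # exceeds the threshold of 30 used in the paper's system.
    SPREAD_THRESHOLD = 30

    def count_spread_fingers(bend_values):
        """Number of fingers whose bend value exceeds the threshold."""
        return sum(1 for v in bend_values if v > SPREAD_THRESHOLD)

    TOP_MENU, RADIO_MENU = "top", "radio"

    class MenuController:
        """Maps (left, right) spread-finger counts to menu actions on the HUD."""

        def __init__(self):
            self.system_on = False
            self.menu = TOP_MENU

        def on_gesture(self, left, right):
            # Global hotkeys described in the text.
            if left == 5 and right == 5:          # both hands fully spread
                self.system_on = not self.system_on
                return "system on" if self.system_on else "system off"
            if right == 5:                        # all right-hand fingers spread
                self.menu = TOP_MENU
                return "cancel, return to previous state"
            if left == 1 and right == 1:          # one finger on each hand
                self.menu = TOP_MENU
                return "go to top menu"
            if not self.system_on:
                return "ignored (system off)"
            if self.menu == TOP_MENU and right == 1:   # Fig. 10: select the radio
                self.menu = RADIO_MENU
                return "radio selected"
            if self.menu == RADIO_MENU:
                if right == 2:                    # Fig. 10: two right fingers = volume up
                    return "radio volume up"
                if right == 4:                    # Fig. 10: four right fingers = off, back
                    self.menu = TOP_MENU
                    return "radio off, back to initial menu"
            return "unmapped gesture"

    ctrl = MenuController()
    ctrl.system_on = True
    readings = [([5, 8, 3, 6, 2], [55, 4, 6, 2, 8]),       # right hand: one finger spread
                ([5, 8, 3, 6, 2], [55, 48, 6, 2, 8]),      # right hand: two fingers spread
                ([5, 8, 3, 6, 2], [55, 48, 40, 35, 8])]    # right hand: four fingers spread
    for left_bend, right_bend in readings:
        counts = (count_spread_fingers(left_bend), count_spread_fingers(right_bend))
        print(counts, "->", ctrl.on_gesture(*counts))

Replaying the three readings reproduces the Fig. 10 sequence: radio selected, volume up, then off and back to the initial menu.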
4. Method
We compared a traditional interface equipped with tactile control and a HDD with a new interface equipped with non-tactile on-wheel gesture control and a HUD through human-in-the-loop experiments. We investigated their performance in terms of primary and secondary tasks, as well as visual and biomechanical distractions. Here, the primary task indicates how well the driving task is executed, and its performance is evaluated by observing how well the driver maintained the vehicle at a specific speed and in a specific lane. Secondary tasks involve the control of the audio and A/C systems via the central console; their performance was evaluated by measuring how accurately and rapidly the driver executed a given secondary task while driving. Finally, visual distractions were evaluated by determining the extent to which the sightline was not focused on the road, whereas biomechanical distractions were evaluated by determining how much the body deviated from the driving position. The movements of the head and the user's line of sight were evaluated using head and eye tracking systems.

Using poster advertisements and internet bulletin boards, we recruited subjects who satisfied the following requirements: possessing a driver's licence, at least 12 months of actual driving experience, and good health with no illnesses, regardless of gender and age.
Figure 11: Experimental equipment: (a) vehicle mock-up; (b) central console from a 1997 Hyundai EF Sonata; (c) Logitech G27 Racing Wheel; (d) Samsung 40-inch LED TV; (e) Media First D-8600A monitor holder; and (f) Samsung SP-H03 Pico mini-beam projector.
Figure 12: Tracking eye movements using Ergoneers Dikablis Professional glasses.
In total, 32 subjects participated in the experiment, i.e. 28 males and 4 females, with an average age of 24.3 years.

To provide a simulated driving environment, we used UC-win/Road (2019), a 3D road and traffic modelling and driving simulation software system developed by FORUM8. As shown in Fig. 11, the vehicle mock-up comprised a frame of 980 mm × … × … (width × length × depth) constructed with aluminium extrusion profiles. The mock-up was fitted with a central console and a seat from a Hyundai Sonata. The steering wheel, brake, and accelerator pedals were from a Logitech G27 Racing Wheel. The dual-motor force feedback of the 11-inch steering wheel, equipped with a spiral gear, allowed the driver's hands to feel changes in vehicle weight and sliding of the tyres. The driver could make 2.5 turns lock to lock. The display equipment comprised three Samsung 40-inch LED TV monitors providing a field of view of 170°. A mobile D-8619A monitor holder from Media First Co., able to support a monitor of up to 55 inches and to provide vertical angle control within 25° and height control within a range of 750–1150 mm, was used to hold the monitors. As a HUD, a Samsung SP-H03 Pico mini-beam projector was installed between the steering wheel and the 40-inch monitor at the front. The projector is palm-sized and delivers 30 ANSI lumens at WVGA (854 × 480) resolution.

To track the driver's eye movements, we used Ergoneers Dikablis Professional glasses (Fig. 12), which are binocular, with a tracking frequency of 60 Hz per eye, and fit over all types of spectacles. The scene is recorded at a resolution of 1920 × 1080 pixels (Ergoneers Dikablis Glasses, 2016). The glasses are automatically corrected for phase differences between the camera and the eye and can be used to monitor the driver's gaze in real time. Experiments can also be conducted easily because a separate marker is not needed to calculate the device's position. The glasses record video information related to the driver's field of view and gaze. A post-processor can be used to specify a certain region in the captured image and obtain statistics regarding visual fixation time and frequency for this region.
Figure 13: Road environment modelling: (a) the first road with easy conditions and (b) the second road with more curved sections.
Figure 14: Road environment configuration: (a) the third road, based on a section of the urban highway in Seoul (Lee et al., 2015), and (b) hazardous conditions on this road.
Figure 15: Road task locations: (a) the first easy road and (b) the second difficult road.
Microsoft Kinect was used to construct a head tracking system capable of capturing head rotations. We used the Single Face programme from Kinect's Face Tracking Visualization SDK, which tracks the driver's face and shows an avatar in the left window and the polygon-masked face of the driver in the right window, both in real time. The avatar drawing code was modified to allow the yaw, pitch, and roll angles of the driver's face to be recorded at a frequency of 60 Hz.

UC-win/Road, a 3D urban visualization and transport modelling software system, was used in the experiment to construct the road environment, which was composed of two roads, as shown in Fig. 13. The first road is easy to drive, with gently curved sections. The second road is challenging to drive because it has more curved sections. Each road is 4.9 km in length with six lanes (three in each direction). Both roads included multiple curved sections and unexpected conditions, such as accidents. There were no cars on the road except for those involved in the accident conditions. We also modelled a section of the urban highway in Seoul, as shown in Fig. 14. This third road is 11.55 km in length with six lanes (three in each direction) and has two tunnels and multiple curved sections. It also includes unexpected conditions, such as accidents and roadwork sites.

For the first and second roads, the participants were asked to control the audio and A/C systems at 16 locations while driving, as shown in Fig. 15. During the experiment, the subjects were asked to maintain the specified reference speed of 80 km/h and to drive in the second lane throughout. Table 1 summarizes the locations at which the task commands were given, together with the road conditions and vehicle lanes at these locations and the details of each of the 16 tasks. Road conditions were classified into two main types: normal conditions with no danger on the road and hazardous conditions with accidents. Among the 16 tasks shown in Table 1, 11 were conducted under normal road conditions, whereas the remaining 5 were conducted under hazardous conditions. The subjects performed the 16 tasks on each road for each user interface. Here, the 'Levels' in the last column refer to how many levels the driver should increase or decrease the volume of an audio device or the airflow strength of the air conditioner.

For the third road, the subjects controlled the audio and A/C systems at 15 locations while driving 11.55 km from the Seongsan Ramp to the Kookmin University Ramp, as shown in Fig. 16. In these experiments, the subjects were asked to maintain a specified reference speed of 70 km/h.
Table 1: Tasks executed by the subjects on the first and second roads.
Task no. | Location | Road condition | Device | Feature | Control | Levels
1 | 100 m | Normal | Radio | – | Turn on | –
2 | 400 m | Normal | Radio | Volume | Up | 2
3 | 700 m | Normal | A/C | – | Turn on | –
4 | 1000 m | Accident | A/C | Top | Select | –
5 | 1300 m | Normal | Heater | – | Turn on | –
6 | 1600 m | Normal | Heater | Airflow | Down | 1
7 | 1900 m | Accident | MP3 | – | Turn on | –
8 | 2200 m | Normal | MP3 | Mute | Select | –
9 | 2500 m | Accident | Heater | – | Turn on | –
10 | 2800 m | Normal | Heater | Bottom | Select | –
11 | 3100 m | Normal | MP3 | – | Turn on | –
12 | 3400 m | Accident | MP3 | Volume | Up | 1
13 | 3700 m | Normal | Radio | – | Turn on | –
14 | 4000 m | Normal | Radio | CH1 | Select | –
15 | 4300 m | Accident | A/C | – | Turn on | –
16 | 4600 m | Normal | A/C | Airflow | Up | 2
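For readers who wish to reproduce the protocol, a task schedule such as Table 1 can be replayed against the simulated vehicle position with a few lines of code. The Python sketch below is a simplified illustration with hypothetical names and command wording; it is not the UC-win/Road scripting actually used in the experiments.

    # Illustrative replay of the Table 1 schedule by distance travelled.
    # Each entry: (trigger location in metres, road condition, spoken command).
    # Only the first four tasks are listed; the rest follow the same pattern.
    TASKS_ROADS_1_2 = [
        (100,  "normal",   "Turn the radio on"),
        (400,  "normal",   "Turn the radio volume up two levels"),
        (700,  "normal",   "Turn the A/C on"),
        (1000, "accident", "Set the A/C outlet to the top position"),
    ]

    def next_command(distance_m, issued):
        """Broadcast the next command once the vehicle passes its trigger location."""
        for location, _condition, command in TASKS_ROADS_1_2:
            if distance_m >= location and location not in issued:
                issued.add(location)
                return command
        return None

    issued = set()
    print(next_command(105.0, issued))   # -> "Turn the radio on"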
Figure 16: Task locations on the third road, representing a section of the urban highway in Seoul.
They were asked to drive from the starting point of the road in the second lane, to use the first lane in the 3850–7350 m section, and to return to the second lane in the 7300–11 550 m section. Table 2 summarizes the locations at which the task commands were given, the road conditions and vehicle lanes at these locations, and the details of each of the 15 tasks. The road conditions were classified into two main types: normal conditions with no danger on a straight road, and hazardous conditions with sharp curves, accidents, road construction, cuttings, or tunnels. Among the 15 tasks shown in Table 2, 8 were performed on normal road sections and the remaining 7 were conducted along hazardous sections of the road.

Before the experiment, the subjects were informed about the aims of the study and the methods employed, and they gave their written consent to participate. They practised driving in the simulator to familiarize themselves with the environment and exercised the on-wheel gesture controls and the traditional push-button and rotary-switch controls for 50 min. To explore their learning rates, they undertook a test after practising 32 tasks for each interface. The average task completion times for each test were recorded for subsequent analysis.
Table 2: Tasks executed by the subjects on the third road.
Task no. | Location | Road condition | Device | Feature | Control | Levels
1 | 900 m | Normal | Radio | Volume | Up | 2
2 | 1270 m | Normal | MP3 | Volume | Down | 3
3 | 1900 m | Sharp curve | A/C | Airflow | Up | 2
4 | 2900 m | Sharp curve | Heater | Airflow | Down | 1
5 | 3480 m | Normal | Radio | CH1 | Select | –
6 | 3850 m | Normal | MP3 | Mute | Select | –
7 | 4230 m | Sharp curve | A/C | Top | Select | –
8 | 4390 m | Normal | Radio | Volume | Down | 3
9 | 5310 m | Cut-in | MP3 | Volume | Up | 2
10 | 5970 m | Tunnel | Heater | Airflow | Up | 3
11 | 6700 m | Sharp curve | A/C | Airflow | Down | 2
12 | 7350 m | Normal | Radio | CH2 | Select | –
13 | 9120 m | Normal | MP3 | Random | Select | –
14 | 10 530 m | Normal | MP3 | Volume | Up | 5
15 | 11 110 m | Sharp curve | Heater | Bottom | Select | –
Table 3: Dependent variables used in the experiments.
Class | Sub-class | Variable | Unit
Objective measurement | Primary task performance | Speed deviation | km/h
Objective measurement | Primary task performance | Lateral distance deviation | %
Objective measurement | Primary task performance | Brake response time | sec
Objective measurement | Secondary task performance | Task completion time | sec
Objective measurement | Eye movement | AOI attention ratio | %
Objective measurement | Eye movement | Mean fixation duration | sec
Objective measurement | Eye movement | Horizontal eye activity | pixel
Objective measurement | Eye movement | Vertical eye activity | pixel
Objective measurement | Head movement | Head yaw, pitch, roll | degree
Subjective evaluation | Primary task performance | Maintaining speed | 1–5
Subjective evaluation | Primary task performance | Maintaining lane | 1–5
Subjective evaluation | Secondary task performance | Performing secondary task | 1–5
Subjective evaluation | Secondary task performance | Confirming setting | 1–5
Subjective evaluation | Secondary task performance | Looking at road | 1–5
Figure 17: Speed deviation effects on the three roads on which the subjects were tested.
To minimize the effects of training under specific experimental conditions, the subjects were divided into two groups, each of which used the two interfaces in a different order. The 32 subjects were divided into two groups of 16: the first group used the tactile interface first, whereas the second group used the newly proposed multimodal interface first. The tasks for a given scenario were completed within approximately 4 min using each interface. At the location for a specific task, a command was broadcast from a speaker, at which point the subjects began executing the task. Immediately after completing the task, the subjects were required to verbally report 'end' to the experimenter. After the experiment, the subjects were required to complete a questionnaire regarding the experiment. The experiment lasted approximately 3.5 h for each subject.

The independent variables used in this study were categorical. Two levels represented the tactile interface and the newly proposed visual and gestural interface for the devices on the central console.
Figure 18: Speed deviation effects: (a) on the first easy road and (b) on the second challenging road.
As shown in Table 3, the dependent variables were classified into two main types: objective measurements and subjective evaluations. These variables were designed according to the approach presented by Bach, Jæger, Skov, and Thomassen (2009). The objective measurement variables are values related to the performance of the primary driving task, i.e. speed- and lane-keeping capability. The values related to the performance of the secondary task include the task completion rate and the task completion time for in-vehicle system control, while values related to distraction were obtained from measurements of the driver's eye movements. The subjective evaluation variables comprise values obtained from the questionnaire responses regarding the level of difficulty in maintaining speed and lane, task completion, setting confirmation, and looking forward (i.e. at the road) while operating the device.
5. Results
We established and tested the following null hypothesis: that there is no difference in performance between the tactile interface and the newly proposed multimodal interface, together with an alternative hypothesis, i.e. that there is a difference between them, with one being superior to the other. A repeated measures ANOVA (analysis of variance) was performed to analyse the effect of each objective measurement variable. A Wilcoxon signed-rank test was used for the subjective evaluation obtained through the questionnaire survey. The significance level (α) was set at 5%; thus, the null hypothesis was rejected, and the alternative hypothesis accepted, if P < .05.

The subjects were required to drive at a constant speed during the experiment. Speed deviation was therefore used to evaluate the effects of conducting secondary tasks on driver performance. Speed deviation was computed as the root-mean-square error between the actual speed and the specified speed during the execution of tasks.
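As a rough illustration of this analysis pipeline, the Python sketch below computes the speed-deviation measure as the root-mean-square error against the reference speed and feeds one value per subject and condition into a repeated-measures ANOVA, with a Wilcoxon signed-rank test for paired subjective ratings. The data, column names, and ratings are synthetic placeholders, not the study's data.

    # Speed deviation (RMSE against the reference speed), repeated-measures ANOVA
    # over the three conditions, and a Wilcoxon signed-rank test for paired ratings.
    import numpy as np
    import pandas as pd
    from statsmodels.stats.anova import AnovaRM
    from scipy.stats import wilcoxon

    def speed_deviation(actual_kmh, reference_kmh=80.0):
        """Root-mean-square error between the actual and reference speed."""
        actual = np.asarray(actual_kmh, dtype=float)
        return float(np.sqrt(np.mean((actual - reference_kmh) ** 2)))

    rng = np.random.default_rng(0)
    rows = []
    for subject in range(1, 33):                      # 32 participants
        for condition in ("baseline", "tactile", "gestural"):
            trace = 80 + rng.normal(0, 2, size=600)   # synthetic speed trace
            rows.append({"subject": subject,
                         "condition": condition,
                         "speed_dev": speed_deviation(trace)})
    df = pd.DataFrame(rows)

    # Repeated-measures one-way ANOVA (alpha = .05, as in the study).
    print(AnovaRM(df, depvar="speed_dev", subject="subject",
                  within=["condition"]).fit())

    # Wilcoxon signed-rank test on paired Likert ratings (illustrative values).
    tactile_ratings = [3, 2, 4, 3, 3, 2, 4, 3]
    gestural_ratings = [4, 4, 5, 4, 3, 4, 5, 4]
    print(wilcoxon(tactile_ratings, gestural_ratings))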
Figure 19: Lateral deviation effects on the three roads.
Figure 17 shows the speed deviations during normal driving and when using each user interface. The results of a repeated measures one-way ANOVA showed that there were no significant differences among the no-task baseline, the tactile interface, and the visual and gestural interface (P > .05), and the speed deviations did not differ significantly among the roads (P > .05). Figure 18 shows the speed deviations on the first easy road and the second difficult road. According to the results of a repeated measures one-way ANOVA for these two roads, there were no significant differences among the no-task baseline, the tactile interface, and the visual and gestural interface (P > .05), nor were there significant differences between the easy and difficult roads (P > .05).

Lateral deviation measures the driver's ability to maintain a central lane position and is a common measurement in driver distraction studies. We computed lateral deviation as the root-mean-squared error of the lateral distance between the vehicle's centre and the lane centre. Figure 19 shows the results of an ANOVA indicating that lateral deviation did not change significantly among the three conditions (P > .05) and did not differ significantly among the roads (P > .05).
Figure 20: Lateral deviation effects: (a) on the first easy road and (b) on the second challenging road.
Figure 21: Brake response times on the three roads.
Figure 20 shows the lateral deviations on the first easy road and the second challenging road. According to the results of a repeated measures one-way ANOVA for the two roads, there were no significant differences among the no-task baseline, the tactile interface, and the visual and gestural interface (P > .05), and no significant differences between operation and no operation (P > .05), whereas the lateral deviations on the easy and difficult roads differed significantly (P < .05).

Among the 31 tasks shown in Tables 1 and 2, 12 were conducted while driving under hazardous road conditions with accidents, sharp corners, road construction, cut-ins, and tunnels. Figure 21 shows the mean brake response times for the baseline and the two interfaces. Unlike the speed and lateral deviations, there was a significant overall effect of the user interface on brake response time (P < .05). The response time for the tactile interface was 19% longer than that of the baseline; however, the difference between the baseline and the proposed visual and gestural interface was not significant (P > .05). Figure 22 shows the brake response times on the first easy road and the second challenging road. According to the results of a repeated measures one-way ANOVA for the two roads, there were no significant differences between the easy and difficult roads (P > .05), whereas there were significant differences among the no-task baseline, the tactile interface, and the visual and gestural interface (P < .05).

Task completion time was measured from the moment the verbal command was finished until 'end' was reported by the subject. For the tactile interface, the subject reported 'end' at the moment when their right hand returned to the steering wheel. A significant effect was found for the interface type (P < .05). Figure 23 shows the secondary task completion times, and Fig. 24 shows those on the first easy road and the second challenging road. According to the results of a repeated measures one-way ANOVA for the two roads, there were no significant differences between the easy and difficult roads (P > .05), whereas the two interfaces differed significantly (P < .05).

Video sequences storing the gaze information collected by the Ergoneers Dikablis Professional eye tracking and data acquisition system were analysed using D-Lab 3.0 data acquisition and analysis software. Several areas of interest (AOIs) can be identified in the image shown to the driver.
Figure 22: Brake response times: (a) on the first easy road and (b) on the second challenging road.
Figure 23: Secondary task completion times.
For a selected time interval, D-Lab analyses the data to calculate the mean glance duration (s), which reflects the average glance duration in the direction of an AOI; the glance rate (1/s), which is the glance frequency for an AOI; the AOI attention ratio (%), which is the percentage of glances falling on the AOI; the mean fixation duration (ms), representing the length of time that a glance remains fixed on an AOI; and the horizontal and vertical eye activities (pixels), corresponding to the standard deviations of the pupil position along the X- and Y-axes. Figure 25 shows an example of the eye tracking results. Although the driver gazes at the forward field of view, it cannot be said unambiguously that they are attentive to the road. Since the menu is displayed on a HUD, it is natural that our interface shows a higher AOI attention ratio, a higher mean fixation duration, and lower eye activities than the conventional tactile interface. Here, we present and compare the AOI attention ratios and vertical eye activities of the baseline and the two interfaces in order to evaluate eye movements for later interface design. Note that the eye movement data were collected for only 16 participants on roads 1 and 2.

- AOI Attention Ratio: As shown in Fig. 26a, the attention ratio for the forward field of view was 86.06%, and a significant main effect was detected for the interface type (P < .05).
- Mean Fixation Duration: Figure 26b shows that the mean fixation duration was 238 ms, with a significant main effect for the interface type (P < .05). The mean fixation duration for the visual and gestural interface was 8% longer than that of the tactile interface, whereas the difference between the baseline and the proposed visual and gestural interface was not significant (P > .05).
- Horizontal Eye Activity: As shown in Fig. 26c, there was a significant overall effect of the user interface on horizontal eye activity (P < .05).
- Vertical Eye Activity: Figure 26d shows that there was also a significant overall effect of the user interface on vertical eye activity (P < .05). The difference between the tactile interface and the proposed visual and gestural interface was not significant (P > .05).

Figure 27 shows the variations in head yaw, pitch, and roll for the subjects during task execution. The black lines indicate the head movements when using the tactile interface, and the red lines indicate the head movements with the new visual and gestural interface. The new interface involved almost no head movement, whereas the tactile interface yielded significant head movement. Figure 28 shows a box plot of the maximum values of head yaw, pitch, and roll during task execution.
Figure 24: Secondary task completion times: (a) on the first easy road and (b) on the second challenging road.
Figure 25: Eye tracking results: (a) eye fixation and (b) areas of interest.
The differences in mean head yaw and pitch between the tactile interface and the new interface were significant (P < .05), whereas the difference in head roll was not significant (P > .05).

To determine the effect of task execution while driving, we assessed the level of difficulty for the dependent variables, i.e. maintaining speed and lane, secondary task performance, setting confirmation, and looking forward while operating the device. The level of difficulty experienced by the drivers for each variable was evaluated on a five-point Likert scale, where 1 = very difficult, 2 = difficult, 3 = neutral, 4 = easy, and 5 = very easy. A Wilcoxon signed-rank test was used for the subjective evaluation obtained through the questionnaire. As shown in Fig. 29, the test results indicated that the visual and gestural interface was more desirable than the tactile interface. In addition, the Wilcoxon signed-rank tests determined that there was a statistically significant median increase in the ratings for maintaining speed, maintaining lane, performing the secondary task, confirming the setting, and looking at the road (P < .05 for all).
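For completeness, the eye-movement measures defined above can be derived from exported gaze samples roughly as follows: the AOI attention ratio is the share of samples falling inside the forward-view area of interest, and the horizontal and vertical eye activities are the standard deviations of the gaze position in pixels. The Python sketch below uses made-up sample coordinates and an assumed rectangular AOI; it is not the D-Lab computation itself.

    # Illustrative computation of the AOI attention ratio and eye activities.
    import numpy as np

    # Each gaze sample: (x_pixel, y_pixel); the forward-view AOI is a rectangle.
    gaze = np.array([[960, 400], [950, 410], [980, 395], [400, 900], [970, 405]])
    aoi = {"x_min": 600, "x_max": 1320, "y_min": 200, "y_max": 700}

    def aoi_attention_ratio(samples, aoi):
        """Percentage of gaze samples that fall inside the AOI rectangle."""
        inside = ((samples[:, 0] >= aoi["x_min"]) & (samples[:, 0] <= aoi["x_max"]) &
                  (samples[:, 1] >= aoi["y_min"]) & (samples[:, 1] <= aoi["y_max"]))
        return 100.0 * inside.mean()

    def eye_activity(samples):
        """Standard deviation of the gaze position along X and Y, in pixels."""
        return samples[:, 0].std(ddof=0), samples[:, 1].std(ddof=0)

    print(f"AOI attention ratio: {aoi_attention_ratio(gaze, aoi):.1f} %")
    print("horizontal/vertical eye activity:", eye_activity(gaze))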
6. Discussion
Visual distraction usually results in the driver taking their eyes off the road. HUD interfaces are designed to keep the driver's eyes focused on the road ahead; however, visual distraction may still occur when users visually attend to the displayed information and miss cues important to the driving task. As traditional eyes-off-road measures are likely not well suited to distraction caused by HUDs, distraction needs to be measured in other ways. According to Pampel and Gabbard (2017), situation awareness, driver performance, and other subjective measures can be affected by visual distraction; thus, they can be used as alternatives to traditional visual measures. One way to measure situation awareness is to probe whether the driver has recognized hazards, such as the lead car braking or an accident ahead. We measured the driver's braking response time for sudden dangers. Regarding driver performance, visual distraction can affect the driver's ability to keep a constant lane, speed, and gap, and can affect lateral stability.
Figure 26: Eye movement measurement for the forward field of view: (a) AOI attention ratio, (b) mean fixation duration, (c) horizontal eye activity, and (d) vertical eye activity.
In this study, we measured the maintenance of the requested speed, the lateral deviation, and the task completion time. We also collected and analysed the subjective evaluation results. The experimental results for all roads, under normal and hazardous conditions, are presented for the traditional interface using tactile control and a HDD, and for the newly proposed interface using on-wheel gesture control and a HUD.

- For the performance of the primary task, i.e. driving, the proposed interface yielded a 3% higher speed-keeping rate and a 2% higher lane-keeping rate than the traditional interface. We found no significant difference between the two interfaces in this respect; however, our interface yields a 20% faster brake response time.
- For the performance of the secondary tasks, the proposed interface achieved a 34% faster task completion time than the traditional interface. These effects were more remarkable under hazardous road conditions, which were characterized by sharp curves and emergencies, than under normal road conditions.
- The traditional interface requires significant head movements both downwards and to the right in order to press the device control keys on the central console, whereas the proposed interface requires only finger gestures to control each device, allowing drivers to keep their hands on the steering wheel. The display output can also be viewed on the HUD, requiring little head movement. Regarding eye movement, the new interface obtained an 8% longer dwell time for the forward field of view and an 8% higher average fixation count; however, we found no significant difference in the average fixation time.
- Based on the questionnaire, the newly proposed interface obtained a 31% better speed-keeping capability, 64% better lane-keeping capability, 64% better secondary task performance, 57% better setting confirmation capability, and 109% better forward-looking capability compared with the traditional interface.

The experimental results demonstrate that the proposed interface reduces visual and biomechanical distractions when locating and controlling devices compared with the traditional interface. This is considered to be beneficial for safe driving.

There are several factors to consider when designing an in-vehicle computing system (Burnett, 2008). The two most often addressed in studies are the driver's age and gender. First, with regard to age, young drivers may be particularly skilled in the use of computing technology relative to the overall population, but they are also more prone to risk taking. In addition, their lack of driving experience causes a limited ability to divide attention and prioritize information sources.
Figure 27: Head movements in all tasks (black lines for the tactile interface and red lines for the new visual and gestural interface): (a) yaw; (b) pitch; and (c) roll.
Conversely, older drivers often suffer from visual impairments that can cause various problems when using an in-vehicle display. Studies have shown that older drivers may take 1.5 to 2 times longer to read information from in-vehicle displays than younger drivers. Therefore, the size, luminance, contrast, and functional complexity of the information presented on the in-vehicle display are particularly important design factors. Second, gender-related effects on road safety attitudes and driver behaviour have been found in previous studies (Cordellieri et al., 2016). In general, men have higher crash rates than women. This gender difference is most pronounced between the ages of 20 and 29 years, after which it declines rapidly with age (SIRC, 2004). According to statistics provided by the Korean National Police Agency (2019), 32 million people held a driving licence in 2018, comprising 18 million men (58.2%) and 13 million women (41.8%). The average age of Korean drivers is 40.2 years; whether our sample is representative of this population in terms of age and gender could not be confirmed, and the deviations are thus not known. This is one of the limitations of our study, and investigating the effects of age and gender for the population should form the subject of future work.

The driving simulator provides a controllable, safe, and cost-effective environment for data collection, making it a useful tool for studying driving behaviour (Risto & Martens, 2014). In a virtual environment, new forms of driver assistance can be quickly implemented and tested in a controlled environment without needing to comply with road safety regulations. However, the results obtained from simulator research need to be evaluated with regard to their generalizability to the real world. Driver behaviour data collected in artificial scenarios under controlled conditions may not necessarily resemble driver behaviour in real-world situations. It is therefore necessary to verify the validity of the simulator results. Several means of validating driving simulator performance have been identified. Behavioural validity, which indicates how the driver's behaviour changes due to the experimental conditions in a simulator and how closely this resembles changes in real-world driving, is very important (Knapper, Christoph, Hagenzieker, & Brookhuis, 2015). It is further distinguished into absolute and relative validity. Absolute validity is obtained when the numerical values measured in the simulator and in the compared method are equivalent. Relative validity refers to values changing in the same direction and with comparable amplitude across methods. Concerning the usefulness of a driving simulator as a method for investigating driving, relative validity may, when carefully applied, suffice for generalizing to real-world driving. In this work, we conducted experiments on a driving simulator using the proposed interface in various scenarios. However, in order to maintain a controlled environment and ensure the safety of the participants, no experiments were conducted using real vehicles. It thus remains for future work to conduct experiments using a real vehicle and compare the results in order to verify the behavioural validity of the driving simulator.

Unlike other gestural interfaces, finger gestures have limitations in representing an object and/or an action for a task.
The numbers or kinds of open fingers cannot be intuitively associated with objects and actions; a visual display is required to compensate for this limitation. In this study, a HUD was chosen over a screen on the central console. Unlike traditional tactile interfaces that require the driver to look at their fingertips while making a menu selection, the HUD helps the driver respond quickly in an emergency by reducing gaze movement. The proposed finger gestures also make it easier to handle emergencies compared with conventional space (mid-air) gestures, because the hands are not detached from the steering wheel.

We should note that the finger spreading method is basically a number input method and requires a display to show the menus to be selected. A new type of interface that is less dependent on a display, and that does not require the driver to take their hands off the steering wheel, is evidently required.
Unlike other gestural interfaces, finger gestures have limitations in representing an object and/or an action for a task. The numbers or kinds of open fingers cannot be intuitively associated with objects and actions, so a visual display is required to compensate for this limitation. In this study, a HUD was chosen over a screen on the central console: unlike traditional tactile interfaces, which require the driver to look at their fingertips while making a menu selection, the HUD helps the driver respond quickly in an emergency by reducing gaze movement. The proposed finger gestures also make it easier to handle an emergency than conventional mid-air gestures, because the hands are never taken off the steering wheel. We should note, however, that the finger spreading method is basically a number input method and therefore requires a display to show the menus to be selected. A new type of interface that is less dependent on a display, and that does not require the driver to take their hands off the steering wheel, is evidently required.
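Because the method is essentially number input, the mapping from a recognized finger count to a menu item shown on the HUD can be as simple as a lookup table. The Python sketch below is a hypothetical illustration; the menu labels and the function name are invented for the example and do not reproduce the study's actual menu structure.

```python
from typing import Optional

# Hypothetical HUD menu: each item is selected by spreading that many fingers.
# Labels are illustrative; they are not the menu tree used in the study.
HUD_MENU = {
    1: "Audio: previous track",
    2: "Audio: next track",
    3: "A/C: temperature down",
    4: "A/C: temperature up",
    5: "Back to main menu",
}

def select_menu_item(finger_count: int) -> Optional[str]:
    """Map the number of spread fingers to a HUD menu entry.

    Returns None for counts with no assigned item, so an unrecognized
    gesture simply leaves the current menu unchanged.
    """
    return HUD_MENU.get(finger_count)

if __name__ == "__main__":
    for count in (2, 5, 0):
        item = select_menu_item(count)
        print(f"{count} finger(s) -> {item or 'no selection'}")
```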
Figure 28: Head movement measurement: yaw, pitch, and roll angles.
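Figure 28 defines the head movement measures as yaw, pitch, and roll angles. As a rough illustration of how such angles can be obtained, the Python sketch below decomposes a 3 × 3 head-pose rotation matrix using the common Z–Y–X (yaw–pitch–roll) Tait–Bryan convention; the convention, and the assumption that a head tracker delivers a rotation matrix at all, are choices made for this example rather than a description of the study's own measurement pipeline.

```python
import numpy as np

def rotation_to_yaw_pitch_roll(R: np.ndarray) -> tuple:
    """Decompose a 3x3 rotation matrix into (yaw, pitch, roll) in degrees,
    assuming R = Rz(yaw) @ Ry(pitch) @ Rx(roll) (Z-Y-X Tait-Bryan angles).
    Not robust near gimbal lock (pitch close to +/-90 degrees)."""
    pitch = np.arcsin(-R[2, 0])
    yaw = np.arctan2(R[1, 0], R[0, 0])
    roll = np.arctan2(R[2, 1], R[2, 2])
    return tuple(np.degrees([yaw, pitch, roll]))

if __name__ == "__main__":
    # A pure 30-degree yaw (head turned to one side), as a quick sanity check.
    a = np.radians(30.0)
    Rz = np.array([[np.cos(a), -np.sin(a), 0.0],
                   [np.sin(a),  np.cos(a), 0.0],
                   [0.0,        0.0,       1.0]])
    print(rotation_to_yaw_pitch_roll(Rz))   # approximately (30.0, 0.0, 0.0)
```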
Figure 29: Level of difficulty in using the two different interfaces based on the results of the questionnaire.

Moreover, in the present study a P5 data glove was used for finger gesture recognition; however, this device is suitable only for experiments and not for everyday use by drivers. A system therefore needs to be developed that can recognize finger gestures without a data glove. The most promising approach is computer-vision-based gesture recognition, for which robust algorithms have been developed that achieve high recognition rates even under varying illumination (a minimal sketch of this approach is given below). Finally, because newer vehicles often have additional button controls on the steering wheel, which serve as hotkeys for the central console controls, driver performance with on-wheel buttons should be investigated and compared with the proposed on-wheel finger spreading gesture method.
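As an illustration of the computer-vision route mentioned above, the following Python sketch counts spread fingers from hand landmarks using the off-the-shelf MediaPipe Hands model together with OpenCV for image capture. The choice of these libraries, the single-frame webcam demo, and the simple tip-above-joint heuristic (which ignores the thumb) are assumptions made for the example, not the recognition system evaluated in this paper.

```python
import cv2
import mediapipe as mp

# Fingertip and PIP-joint landmark indices in the MediaPipe hand model
# (index, middle, ring, pinky); the thumb needs a different, sideways test
# and is omitted from this simple heuristic.
FINGERTIPS = (8, 12, 16, 20)
PIP_JOINTS = (6, 10, 14, 18)

hands = mp.solutions.hands.Hands(max_num_hands=1, min_detection_confidence=0.5)

def count_spread_fingers(bgr_frame) -> int:
    """Rough count of spread fingers: a finger counts as open when its tip
    lies above its PIP joint in image coordinates (smaller y). Illustrative
    heuristic only; it assumes an upright hand facing the camera."""
    result = hands.process(cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2RGB))
    if not result.multi_hand_landmarks:
        return 0
    lm = result.multi_hand_landmarks[0].landmark
    return sum(1 for tip, pip in zip(FINGERTIPS, PIP_JOINTS) if lm[tip].y < lm[pip].y)

if __name__ == "__main__":
    cap = cv2.VideoCapture(0)  # default webcam; a stand-in for an in-cabin camera
    ok, frame = cap.read()
    if ok:
        print("spread fingers:", count_spread_fingers(frame))
    cap.release()
```

A production on-wheel system would additionally have to cope with occlusion by the wheel rim, gloves, and changing illumination, which is exactly the robustness that the vision-based recognition work referred to above aims to provide.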
7. Conclusion
In this study, we developed a user interface based on on-wheel finger spreading gestures and a HUD, which is expected to minimize visual and biomechanical distractions while controlling the audio and A/C systems of the central console when driving. Based on objective measurements and subjective evaluations in an experimental simulation, we compared the proposed system both quantitatively and qualitatively against a traditional interface using tactile control and an HDD. Task completion rate, the time required for device control, the drivers' ability to maintain their speed and lane, and head and eye movements related to driver distraction were determined through objective measurements. In addition, a questionnaire survey of 15 subjects was conducted to obtain subjective evaluations of the two types of interfaces; the levels of difficulty in terms of driving, performing secondary tasks, and looking forward while executing a task when driving were evaluated for both interfaces. Our results show that the proposed interface reduces visual and biomechanical distractions for drivers when compared with the traditional interface.

Acknowledgement
This research was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (NRF-2017R1D1A1B03036384).
Conflict of interest statement

None declared.
References
Angelini, L., Carrino, F., Carrino, S., Caon, M., Khaled, O. A., Baumgartner, J., Sonderegger, A., Lalanne, D., & Mugellini, E. (2014). Gesturing on the steering wheel: A user-elicited taxonomy. In Proceedings of the 6th International Conference on Automotive User Interfaces and Interactive Vehicular Applications, Seattle, WA, 17–19 September 2014, (pp. 1–8).
Angelini, L., Carrino, F., Carrino, S., Caon, M., Lalanne, D., Khaled, O. A., & Mugellini, E. (2013). Opportunistic synergy: A classifier fusion engine for micro-gesture recognition. In Proceedings of the 5th International Conference on Automotive User Interfaces and Interactive Vehicular Applications, Eindhoven, Netherlands, 28–30 October 2013, (pp. 30–37).
Bach, K. M., Jaeger, M. G., Skov, M. B., & Thomassen, N. G. (2008). You can touch, but you can't look: Interacting with in-vehicle systems. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, Florence, Italy, April 2008, (pp. 1139–1148).
Bach, K. M., Jæger, M. G., Skov, M. B., & Thomassen, N. G. (2009). Interacting with in-vehicle systems: Understanding, measuring, and evaluating attention. In Proceedings of the 23rd British HCI Group Annual Conference on People and Computers: Celebrating People and Technology, Cambridge, UK, 1–5 September 2009, (pp. 453–462).
Barón, A., & Green, P. (2006). Safety and usability of speech interfaces for in-vehicle tasks while driving: A brief literature review. Technical Report, University of Michigan Transportation Research Institute (UMTRI), February 2006.
Bellotti, F., De Gloria, A., Montanari, R., Dosio, N., & Morreale, D. (2005). COMUNICAR: Designing a multimedia, context-aware human–machine interface for cars. Cognition, Technology & Work, 36–45.
Burnett, G. (2008). Designing and evaluating in-car user interfaces. In Zaphris & C. S. Ang (Eds.), Human computer interaction: Concepts, methodologies, tools, and applications, (pp. 532–551). Hershey, PA: Information Science Reference.
Choi, N. C., & Lee, S. H. (2015). Discomfort evaluation of truck ingress/egress motions based on biomechanical analysis. Sensors, (6), 13568–13590.
Cordellieri, P., Baralla, F., Ferlazzo, F., Sgalla, R., Piccardi, L., & Giannini, A. M. (2016). Gender effects in young road users on road safety attitudes, behaviors and risk perception. Frontiers in Psychology, 1412.
Döring, T., Kern, D., Marshall, P., Pfeiffer, M., Schöning, J., Gruhn, V., & Schmidt, A. (2011). Gestural interaction on the steering wheel: Reducing the visual demand. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, Vancouver, Canada, 7–12 May 2011, (pp. 483–492).
Endres, C., Schwartz, T., & Müller, C. A. (2011). Geremin: 2D microgestures for drivers based on electric field sensing. In Proceedings of the 16th International Conference on Intelligent User Interfaces, Palo Alto, CA, 13–16 February 2011, (pp. 327–330).
Eom, H., & Lee, S. H. (2015). Human–automation interaction design for adaptive cruise control systems of ground vehicles. Journal of Computational Design and Engineering, (2), 179–188.
Gellatly, A. W. (1997). The use of speech recognition technology in automotive applications (PhD thesis). Virginia Polytechnic Institute and State University, Blacksburg, VA.
González, I. E., Wobbrock, J. O., Chau, D. H., Faulring, A., & Myers, B. A. (2007). Eyes on the road, hands on the wheel: Thumb-based interaction techniques for input on steering wheels. In Proceedings of Graphics Interface, Montreal, Canada, 28–30 May 2007, (pp. 95–102).
Ha, H., & Ko, K. (2015). A method for image-based shadow interaction with virtual objects. Journal of Computational Design and Engineering, 26–37.
Hong, D., & Woo, W. (2008). Recent research trend of gesture-based user interfaces. Telecommunications Review, 403–413.
Kang, M. S. (2012). Gesture interaction design for cars (M.S. thesis). Graduate School of Techno Design, Kookmin University.
Kiefer, R. J. (1998). Defining the "HUD benefit time window". Vision in Vehicles, 133–142.
Klauer, S., Dingus, T., Neale, V., Sudweeks, J., & Ramsey, D. (2006). The impact of driver inattention on near-crash/crash risk: An analysis using the 100-car naturalistic driving study data. Washington, DC, USA (No. HS-810 594): National Highway Transportation Safety Administration.
Knapper, A., Christoph, M., Hagenzieker, M., & Brookhuis, K. (2015). Comparing a driving simulator to the real road regarding distracted driving speed. European Journal of Transport and Infrastructure Research.
Korean National Police Agency. (2019). … fileData.do, Accessed 30 May 2020.
Koyama, S., Sugiura, Y., Ogata, M., Withana, A., Uema, Y., Honda, M., Yoshizu, S., Sannomiya, C., Nawa, K., & Inami, M. (2014). Multi-touch steering wheel for in-car tertiary applications using infrared sensors. In Proceedings of the 5th Augmented Human International Conference, March 2014, Article No. 5.
Lauber, F., Follmann, A., & Butz, A. (2014). What you see is what you touch: Visualizing touch screen interaction in the head-up display. In Proceedings of the 2014 Conference on Designing Interactive Systems, (pp. 171–180).
Lee, S. H., & Ahn, D. R. (2015). Design and verification of driver interfaces for adaptive cruise control systems. Journal of Mechanical Science and Technology, (6), 2451–2460.
Lee, H., Jung, M., Lee, K. K., & Lee, S. H. (2017). A 3D human–machine integrated design and analysis framework for squat exercises with a Smith machine. Sensors, (2), 299.
Liu, Y. C., & Wen, M. H. (2004). Comparison of head-up display (HUD) vs. head-down display (HDD): Driving performance of commercial vehicle operators in Taiwan. International Journal of Human-Computer Studies, (5), 679–697.
Lee, S. H., Yoon, S. O., & Shin, J. H. (2015). On-wheel finger gesture control for in-vehicle systems on central consoles. In Adjunct Proceedings of the 7th International Conference on Automotive User Interfaces and Interactive Vehicular Applications, Nottingham, UK, 1–3 September 2015, (pp. 94–99).
Llaneras, R. E. (2000). NHTSA driver distraction internet forum: Summary and proceeding. Washington, DC, USA (No. HS-809 204): National Highway Transportation Safety Administration.
Neale, V. L., Dingus, T. A., Klauer, S. G., Sudweeks, J., & Goodman, M. (2005). An overview of the 100-car naturalistic study and findings. Washington, DC, USA: National Highway Transportation Safety Administration, Paper 05-0400.
Nielsen, J. (1995). 10 usability heuristics for user interface design. Nielsen Norman Group, (1).
Noy, Y. I., Lemoine, T. L., Klachan, C., & Burns, P. C. (2004). Task interruptability and duration as measures of visual distraction. Applied Ergonomics, 207–213.
Pampel, S. M., & Gabbard, J. L. (2017). Measures of visual distraction in augmented reality interfaces. In Workshop on Augmented Reality for Intelligent Vehicles, 24 September 2017, Oldenburg, Germany.
Pfleging, B., Schneegass, S., & Schmidt, A. (2012). Multimodal interaction in the car: Combining speech and gestures on the steering wheel. In Proceedings of the 4th International Conference on Automotive User Interfaces and Interactive Vehicular Applications, Portsmouth, NH, 17–19 October 2012, (pp. 155–162).
Prinzel, L. J., III, & Risser, M. (2004). Head-up displays and attention capture. NASA Technical Report NASA/TM-2004-213000.
Ranney, T. A., Garrott, W. R., & Goodman, M. J. (2001). NHTSA driver distraction research: Past, present, and future (No. 2001-06-0177). SAE Technical Paper.
Rempel, D., Camilleri, M. J., & Lee, D. L. (2004). The design of hand gestures for human–computer interaction: Lessons from sign language interpreters. International Journal of Human-Computer Studies, (10), 728–735.
Riener, A. (2012). Gestural interaction in vehicular applications. Computer, 42–47.
Risto, M., & Martens, M. H. (2014). Driver headway choice: A comparison between driving simulator and real-road driving. Transportation Research Part F: Traffic Psychology and Behaviour, 1–9.
Shim, J. S., & Lee, S. H. (2016). A study on tactile and gestural controls of driver interfaces for in-vehicle systems. Korean Journal of Computational Design and Engineering, (1), 42–50.
Shneiderman, B. (1998). Eight golden rules of interface design. In Designing the user interface (3rd ed.). USA: Addison Wesley.
SIRC. (2004). Sex differences in driving and insurance risk: An analysis of the social and psychological differences between men and women that are relevant to their driving behaviour. Social Issues Research Centre, Oxford.
Skrypchuk, L., Langdon, P., Sawyer, B. D., & Clarkson, P. J. (2020). Unconstrained design: Improving multitasking with in-vehicle information systems through enhanced situation awareness. Theoretical Issues in Ergonomics Science, (2), 183–219.
Sojourner, R. J., & Antin, J. F. (1990). The effects of a simulated head-up display speedometer on perceptual task performance. Human Factors, (3), 329–339.
Son, J., Jang, H., & Choi, Y. (2019). Tangible interface for shape modeling by block assembly of wirelessly connected blocks. Journal of Computational Design and Engineering, (4), 542–550.
Stutts, J. C., Feaganes, J., Rodgman, E., Hamlett, C., Meadows, T., Reinfurt, D., Gish, K., & Staplin, L. (2003). Distractions in everyday driving. Washington, DC, USA (No. HS-043 573): AAA Foundation for Traffic Safety.
Stutts, J. C., Reinfurt, D. W., Staplin, L., & Rodgman, E. A. (2001). The role of driver distraction in traffic crashes. Washington, DC, USA: AAA Foundation for Traffic Safety.
Sun, C., Hu, W., & Xu, D. (2019). Navigation modes, operation methods, observation scales and background options in UI design for high learning performance in VR-based architectural applications. Journal of Computational Design and Engineering, (2), 189–196.
Takahashi, R., Suzuki, H., Chew, J. Y., Ohtake, Y., Nagai, Y., & Ohtomi, K. (2018). A system for three-dimensional gaze fixation analysis using eye tracking glasses. Journal of Computational Design and Engineering, (4), 449–457.
Tognazzini, B. (2014). First principles of interaction design (revised & expanded). AskTog. Available online at https://asktog.com/atc/principles-of-interaction-design/, Accessed 30 May 2020.
Tsimhoni, O., & Green, P. (2001). Visual demand of driving and the execution of display-intensive, in-vehicle tasks. In Proceedings of the Human Factors and Ergonomics Society 45th Annual Meeting, …
… Proceedings of the Human Factors and Ergonomics Society Annual Meeting, 1–3 October 2013, (Vol. 57, No. 1, pp. 1643–1647).
Wachs, J. P., Kölsch, M., Stern, H., & Edan, Y. (2011). Vision-based hand-gesture applications. Communications of the ACM, (2), 60–71.
Weinberg, G., Harsham, B., & Medenica, Z. (2011). Evaluating the usability of a head-up display for selection from choice lists in cars. In Proceedings of the 3rd International Conference on Automotive User Interfaces and Interactive Vehicular Applications, (pp. 39–46).
Werner, S. (2014). The steering wheel as a touch interface: Using thumb-based gesture interfaces as control inputs while driving. In Proceedings of the 6th International Conference on Automotive User Interfaces and Interactive Vehicular Applications, Seattle, WA, 17–19 September 2014, (pp. 1–4).