Journal on Multimodal User Interfaces | 2019

Multimodal interaction in automotive applications

 
 
 

Abstract


With the smartphone, cloud computing, and wireless networking becoming ubiquitous, pervasive distributed computing is approaching reality. Multiple modes are already available in today's automotive dashboards, including haptic controllers, touch screens, 3D gestures, speech, audio, secondary displays, and gaze, among others. Increasingly, the internet of things is also weaving its way into many aspects of our daily lives, encouraging users to project their expectations of natural interaction onto all kinds of digital interfaces, including cars. Car manufacturers do not yet fully meet these expectations, but the clear trend is to add technology that delivers on the vision and promise of a safer, more enjoyable drive, and multimodal interaction technology is a key part of delivering this vision. In fact, car manufacturers are aiming for a personal assistant with a deep understanding of the car and the ability to meet driving-related demands as well as non-driving-related needs. For instance, such an assistant could naturally answer questions about the car and help schedule service when needed. It could find the preferred gas station along the route or, better still, plan a stop and ensure the driver arrives in time for a meeting. It would understand that a perfect business meal involves more than finding a sponsored restaurant: it takes unbiased reviews, availability, budget, and trouble-free parking into account and notifies all invitees of the meeting time and location. Finally, multimodal signals can also serve as a source for fatigue detection. The main goal of multimodal interaction and driver assistance systems is to ensure that the driver can focus on the primary task of driving safely. This is why some of the biggest innovations in today's cars expand how drivers and their passengers access information and control non-driving functions in their vehicle.

In this special issue, the authors examine the challenges and opportunities of multimodal interaction for reducing cognitive load and increasing learnability, as well as current research that has the potential to be employed in tomorrow's cars. We present seven selected, peer-reviewed submissions that investigate this topic from various viewpoints and incorporate a range of modalities.

Touch input has been available in cars for some time. Designers have had freedom in arranging the controls, for example, allowing them to be tailored to specific tasks. In contrast to touch, in-air gestures are generally not restricted to interaction with a predefined surface but can be performed wherever the driver's hands are. Jason Sterkenburg, Steven Landry and Myounghoon Jeon explore the capabilities of this modality combined with auditory displays and eye tracking in Design and Evaluation of Auditory-supported Air Gesture Controls in Vehicles. The manuscript by Francesco Biondi, Douglas Getty, Joel Cooper and David Strayer focuses on Investigating the Role of Design Components on Workload and Usability of In-Vehicle Auditory-Vocal Systems. Similarly, Michael Braun, Nora Broy, Bastian Pfleging and Florian Alt look at how such voice-based systems can be efficiently augmented with visualizations in Visualizing Natural Language Interaction for Conversational In-Vehicle Information Systems to Minimize Driver Distraction. In a similar vein, it is important to consider how efficient switching between the modalities available in the car can be promoted. Such a study, focusing on touch and speech, is presented by Florian Roider, Sonja Rümelin, Bastian Pfleging and Tom Gross in Investigating the Effects of Modality Switches on Driver Distraction and Interaction Efficiency in the Car.

Volume 13
Pages 53-54
DOI 10.1007/s12193-019-00295-x
Language English
Journal Journal on Multimodal User Interfaces

Full Text