Thomas M. Gable
Georgia Institute of Technology
Publication
Featured research published by Thomas M. Gable.
Automotive User Interfaces and Interactive Vehicular Applications | 2014
Keenan R. May; Thomas M. Gable; Bruce N. Walker
Multimodal and visual-only air gesture systems for navigating menus in the vehicle were developed and compared to a conventional direct touch system in a driving simulator using various distraction metrics. Participants using the multimodal air gesture system exhibited safer secondary task dwell patterns, but took longer to complete tasks and reported higher workload compared to the touch system.
International Journal of Human-Computer Interaction | 2015
Myounghoon Jeon; Thomas M. Gable; Benjamin K. Davison; Michael A. Nees; Jeff Wilson; Bruce N. Walker
Auditory display research for driving has mainly examined a limited range of tasks (e.g., collision warnings, cell phone tasks). In contrast, the goal of this project was to evaluate the effectiveness of enhanced auditory menu cues in a simulated driving context. The advanced auditory cues of “spearcons” (compressed speech cues) and “spindex” (a speech-based index cue) were predicted to improve both menu navigation and driving. Two experiments used a dual task paradigm in which users selected songs on the vehicle’s infotainment system. In Experiment 1, 24 undergraduates played a simple, perceptual-motor ball-catching game (the primary task; a surrogate for driving), and navigated through an alphabetized list of 150 song titles—rendered as an auditory menu—as a secondary task. The menu was presented either in the typical visual-only manner, or enhanced with text-to-speech (TTS), or TTS plus one of three types of additional auditory cues. In Experiment 2, 34 undergraduates conducted the same secondary task while driving in a simulator. In both experiments, performance on both the primary task (success rate of the game or driving performance) and the secondary task (menu search time) was better with the auditory menus than with no sound. Perceived workload scores as well as user preferences favored the enhanced auditory cue types. These results show that adding audio, and enhanced auditory cues in particular, can allow a driver to operate the menus of in-vehicle technologies more efficiently while driving more safely. Results are discussed in the context of multiple resource theory.
Applied Ergonomics | 2015
Myounghoon Jeon; Bruce N. Walker; Thomas M. Gable
Research has suggested that interaction with an in-vehicle software agent can improve a driver's psychological state and increase road safety. The present study explored the possibility of using an in-vehicle software agent to mitigate the effects of driver anger on driving behavior. After either anger or neutral mood induction, 60 undergraduates drove in a simulator with two types of agent intervention. Results showed that both speech-based agents not only enhanced driver situation awareness and driving performance, but also reduced anger level and perceived workload. Regression models showed that a driver's anger influences driving performance measures, mediated by situation awareness. The practical implications include guidelines for the design of social interaction with in-vehicle software agents.
Automotive User Interfaces and Interactive Vehicular Applications | 2014
Thomas M. Gable; Keenan R. May; Bruce N. Walker
Recent technological advances have made it possible to reliably track the human body at low cost, enabling the proliferation of Air Gesture (AG) interfaces. It has been proposed that AGs may be a safe and effective way to interact with in-vehicle technologies. However, designers do not presently have a well-developed, well-adapted set of heuristics they can consult to ensure their designs are suitable for the driving environment. This paper aims to address this gap by discussing how a popular set of human-computer interaction heuristics can be applied to AGs in the vehicle.
Automotive User Interfaces and Interactive Vehicular Applications | 2013
Richard Swette; Keenan R. May; Thomas M. Gable; Bruce N. Walker
Three novel interfaces for navigating a hierarchical menu while driving were experimentally evaluated. Prototypes utilized redundant visual and auditory feedback (multimodal), and were compared to a conventional direct touch interface. All three multimodal prototypes employed an external touchpad separate from the infotainment display in order to afford simple eyes-free gesturing. Participants performed a basic driving task while concurrently using these prototypes to perform menu selections. Mean lateral lane deviation, eye movements, secondary task speed, and self-reported workload were assessed for each condition. Of all conditions, swiping the touchpad to move one-by-one through menu items yielded significantly smaller lane deviations than direct touch. In addition, in the serial swipe condition, the same time spent looking at the prototype was distributed over a longer interaction time. The remaining multimodal conditions allowed users to feel around a pie or list menu to find touchpad zones corresponding to menu items, allowing for either exploratory browsing or shortcuts. This approach, called GRUV, was ineffective compared to serial swiping and direct touch, possibly due to its uninterruptable interaction pattern and overall novelty. The proposed explanation for the performance benefits of the serial swiping condition was that it afforded flexible subtasking and incremental progress, in addition to providing multimodal output.
Automotive User Interfaces and Interactive Vehicular Applications | 2015
Thomas M. Gable; Andrew L. Kun; Bruce N. Walker; Riley J. Winton
Historically, detection of workload has relied on subjective measures, but new research demands are driving interest toward objective, immediate physiological measures. This paper examines initial data from a study comparing the commonly used measure of heart rate (HR) to pupil size (PS) in detecting changes in workload in a driving environment. Participants drove a simulator and performed the n-back task while their HR and PS were tracked. Initial results show that both measures exhibit the expected trends, but significant differences between n-back levels found in the PS data suggest that PS may be more sensitive to differences in workload. The results and related considerations are discussed.
Automotive User Interfaces and Interactive Vehicular Applications | 2013
Thomas M. Gable; Bruce N. Walker; Haifa R. Moses; Ramitha D. Chitloor
In-vehicle technologies can create dangerous situations through driver distraction. In recent years, research has focused on driver distraction through communications technologies, but other tasks, such as scrolling through a list of songs or names, can also carry high attention demands. Research has revealed that the use of advanced auditory cues for in-vehicle technology interaction can decrease cognitive demand and improve driver performance when compared to a visual-only system. This paper discusses research investigating the effects of applying advanced auditory cues to a search task on a mobile device while driving, particularly focusing on visual fixation. Twenty-six undergraduates, wearing eye-tracking glasses, searched through a list of 150 songs on a cell phone while performing the lane change task. Eye-tracking data, performance, workload, and preferences for six conditions were collected. Compared to no sound, visual fixation time on driving and preferences were significantly higher for the advanced auditory cue of spindex. Results suggest greater visual availability for driving when the spindex cue is applied to the search task, and provide further evidence that these advanced auditory cues can lessen distraction from driving while using mobile devices to search for items in lists.
Archive | 2017
Andreas Löcken; Shadan Sadeghian Borojeni; Heiko Müller; Thomas M. Gable; Stefano Triberti; Cyriel Diels; Christiane Glatz; Ignacio Alvarez; Lewis L. Chuang; Susanne Boll
Informing a driver of a vehicle’s changing state and environment is a major challenge that grows with the introduction of in-vehicle assistant and infotainment systems. Even in the age of automation, the human will need to be in the loop for monitoring, taking over control, or making decisions. In these cases, poorly designed systems could impose needless attentional demands on the driver, drawing attention away from the primary driving task. Existing systems offer simple and often unspecific alerts, leaving the human with the demanding task of identifying, localizing, and understanding the problem. Ideally, such systems should communicate information in a way that conveys its relevance and urgency. Specifically, information useful to promote driver safety should be conveyed as effective calls for action, while information not pertaining to safety (therefore less important) should be conveyed in ways that do not jeopardize driver attention. Adaptive ambient displays and peripheral interactions have the potential to provide superior solutions and could serve to unobtrusively present information, to shift the driver’s attention according to changing task demands, or to enable a driver to react without losing focus on the primary task. In order to build a common understanding across researchers and practitioners from different fields, we held a “Workshop on Adaptive Ambient In-Vehicle Displays and Interactions” at the AutomotiveUI '15 conference. In this chapter, we discuss the outcomes of this workshop, provide examples of possible applications now or in the future, and conclude with challenges in developing or using adaptive ambient interactions.
Automotive User Interfaces and Interactive Vehicular Applications | 2017
Brittany E. Noah; Philipp Wintersberger; Alexander G. Mirnig; Shailie Thakkar; Fei Yan; Thomas M. Gable; Johannes Kraus; Roderick McCall
This workshop addresses contemporary issues surrounding trust in technology in the challenging and constantly changing context of automated vehicles. In particular, it focuses on two main aspects: (1) appropriate definitions of trust and associated concepts for the automated driving context, especially regarding trust calibration in individual capabilities versus overall trust; and (2) appropriate measures (qualitative and quantitative) to quantify trust in automated vehicles and in-vehicle interfaces. The workshop builds on a keynote and participants' accepted position papers, which serve as the basis for focused breakout sessions. The outcome of the workshop will form the basis for a subsequent joint publication by organizers and participants addressing issues (1) and (2).
Automotive User Interfaces and Interactive Vehicular Applications | 2017
Brittany E. Noah; Thomas M. Gable; Shao-Yu Chen; Shruti Singh; Bruce N. Walker
In-vehicle automated safety features aim to increase safety; however, they are not always perfect. When automated systems fail, they leave the driver unprepared to recover quickly and safely. Reliability displays, informing the driver of the system's confidence in itself, could help keep drivers aware of the automation's status and increase safety when failures occur. This study proposed two metrics for displaying this information to the driver: automation reliability (AR), a system-centric metric; and required driver engagement (RDE), a human-centric metric. Visual displays were developed in three levels for each metric: quantitative, qualitative, and representational. Participants sorted these displays by level of AR and RDE, and rated their preference. Results showed that AR displays were matched more accurately than RDE displays. Preference ratings were not significantly different between the display types. These results are discussed in terms of how such displays can be designed to ensure drivers understand vehicle automation.