Conversations On Multimodal Input Design With Older Adults
Adam S. Williams
Colorado State University, Fort Collins, CO 80525, [email protected]
Sarah Coler
Columbine Health, Fort Collins, CO 80525, [email protected]
Francisco Ortega
Colorado State University, Fort Collins, CO 80525, [email protected]
Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the owner/author(s). Copyright held by the owner/author(s).
CHI '20, April 25–30, 2020, Honolulu, HI, USA. ACM 978-1-4503-6819-3/20/04. https://doi.org/10.1145/3334480.XXXXXXX
Abstract
Multimodal input systems can help bridge the wide range of physical abilities found in older generations. After conducting a survey/interview session with a group of older adults at an assisted living community, we believe that gesture and speech should be the main inputs for such a system. Additionally, collaborative design of new systems was found to be useful for facilitating conversations around input design with this demographic.
Author Keywords
Human Computer Interaction; Older Adults; Interaction Design; Inputs
CCS Concepts
• Human-centered computing → Human computer interaction (HCI); Interaction design; Accessibility;
Introduction
Input technologies can help enable seniors to maintain their independence into later years by offloading some of the needs of daily activities, or by reducing the cognitive load needed for those activities [8]. There are, however, differences in how older adults and younger adults interact with and perceive controls [2]. Some of these differences include perception limitations or inexperience with new technologies. Sadly, when interactions with a system are poor, older adults often assign personal blame instead of considering that the interface may not be properly designed for them [2]. When input technologies are not accessible, it promotes feelings of exclusion and loss of control, which can contribute negatively to one's life [8].

Gestures have shown promise as an appropriate interaction technique for older adults, showing no significant difference in accuracy between older and younger adults [9]. Speech has also been seen as a preferred input for older adults [1].
Table 1: Response scale for device usage frequency

Table 2: Response scale for self-assessment ratings
Multimodal interfaces show promise in improving accessibility for older adults by providing a more natural and efficient interaction space [3]. Multimodal inputs are associated with higher user satisfaction, particularly with speech and gestural (touch) interfaces [4]. The benefits of this combined modality extend beyond user satisfaction. These input streams often contain non-redundant information, which can lead to better recognizer systems development [6].
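To make the non-redundancy point concrete, the sketch below shows one simple way such streams can be combined (illustrative only, not part of this study; the `fuse` function and command names are hypothetical). Each modality's recognizer emits candidate commands with confidence scores, and a late-fusion step sums the scores, so a command supported by both speech and gesture outranks one supported by a single stream.

```python
# Hypothetical late-fusion sketch for two input modalities.
# Each recognizer emits (command, confidence) hypotheses; fusion sums
# the confidences per command, so agreement between speech and gesture
# raises a command's combined score above either stream alone.

def fuse(speech_hyps, gesture_hyps):
    """Combine per-modality hypotheses into one ranked command list."""
    scores = {}
    for cmd, conf in speech_hyps + gesture_hyps:
        scores[cmd] = scores.get(cmd, 0.0) + conf
    # Highest combined score first.
    return sorted(scores.items(), key=lambda kv: -kv[1])

# Example: speech is ambiguous, gesture disambiguates.
speech = [("place chip", 0.6), ("clear card", 0.3)]
gesture = [("place chip", 0.8)]
ranked = fuse(speech, gesture)
```

Because the two streams carry partially independent evidence, the fused ranking can be more reliable than either recognizer on its own, which is the intuition behind the recognizer-development benefit noted above [6].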
Survey
To evaluate the feasibility of multimodal gesture- and speech-based inputs, we conducted a survey/interview session at an assisted living home. This pilot study is aimed at guiding our ideas about the future direction of input devices for this generation. The survey included questions about device usage frequency and comfort with using devices. The response scales are shown in Tables 1 and 2. There were 11 participants (age M = 86.1, SD = 10.89). Ages ranged from 66 to 100 (8 male, 2 female). Previous careers ranged from homemaker to pilot.
Results
Due to small sample sizes, we treat the survey responses as a framing for the discussion; in-depth statistical analysis was not performed on the survey results.

The results were impacted by age. People over 86 (group 2) selected "never use" for all of the device usage questions. People 86 and under (group 1) answered more of the usage questions. Most participants in group 1 reported using a smartphone daily (3/4; group 2: 1/7). All phone usage questions were rated as either daily or never. Only 2 participants reported using a mouse/keyboard daily, both in group 1.

Group 1 rated their eyesight as good (M = 3.25); group 2 rated their eyesight as fair (M = 2.83, SD = 1.35). Dexterity followed the same trend. Group 1 rated their dexterity as good (M = 4, SD = 0); group 2 rated it as poor (M = 1.88, SD = 1.5). These results are shown in Figure 1. Group 1 rated their comfort with touchscreens as good (M = 3, SD = 0.7). Mouse/keyboard and speech controls were both rated as fairly comfortable (M = 2, SD = 1.41). For all device comfort questions, group 2 rated their comfort as very poor.

People said that they wanted to see more inputs that mirror what they currently use, namely touchscreens. Some people felt that technologies are moving too fast, indicating a preference for keeping inputs consistent in emerging technologies. When asked if they would like speech-based inputs, people unanimously said no. Yet when asked whether using speech commands for a system would be helpful, most people said it would be beneficial. The responses to these similar questions flipped based on the framing of the question.

After having difficulties eliciting input preferences for new technologies in general, we framed the questions around a virtual reality bingo game (VRBG). The participants helped design what this game would do. Having helped with the design, participants were more involved in conversations around inputs for it. Bingo was chosen based on the advice of the activities director for the assisted living community.
In their experience, the residents both enjoyed and could relate to the mechanics of bingo. This made the conversation relevant to the participants. Some participants showed more buy-in at the prospect of VRBG helping them to overcome physical limitations by scaling bingo cards to work around eyesight or dexterity limitations.

Participants wanted to touch the virtual bingo card to place their chips. A few people wanted speech-based inputs in combination with touch, but not as a stand-alone input. Participants said they wanted few controls in this system. One participant said "simple is safe," which was then repeated by other residents over the course of the conversation.
Figure 1: Participants' self-reported eyesight and manual dexterity by group
There was apprehension around participation in the conversation. Prior to the VRBG segment of the conversation, the thought of technology seemed to be off-putting to some participants. This was more pronounced in the participants above 86. Some participants, though eager to fill out the questionnaires and talkative during the initial session, became very quiet during the discussion about inputs. There was some level of guilt reported by several participants saying that they "didn't want to ruin the results," indicating that their self-perceived lack of experience with technology would be harmful to the survey. One participant went so far as to say they did not want to participate at all because they were "computer illiterate" and did not want to ruin the results. That participant later joined the conversation. This looks similar to the self-blame found by Chun and Patterson [2]. Some other quotes from participants are found in Table 3.
Discussion
The results from the survey and the interviews provide some direction for future input design. The interviews around input design also give insights on conducting meaningful conversations with this demographic. It was difficult to frame what we were asking when talking about what inputs they would like to see in future devices. There was a large disconnect between computers, phones, headsets, and any other type of technology. Different framings of the same question yielded drastically different answers.

When considering hand-based inputs, gestures or otherwise, it is important to consider the level of mobility and hand usage older adults have. All of our participants reported having some level of manual dexterity decrease, with 4/11 indicating it was severe. This decrease in manual dexterity has been linked to decreased selection task performance [5]. The addition of speech-based inputs would make for a more flexible system, able to better cater to individual needs than touch or gesture alone.

Going into a conversation with specific questions was difficult; there is a high potential for participants not to contribute. Establishing a dialog around a common notion was helpful. The common notion in this instance was the VRBG. Having a member of the activities staff involved was impactful; the familiar face helped ease some of the anxieties in the room. That staff member was also able to help re-frame questions in ways that participants were more open to or more able to answer.

We recommend conducting interviews with this population across multiple sessions. Establishing a rapport would help with participant involvement in the conversation. Most residents mentioned guilt about "ruining results." We believe that having multiple sessions will help lessen this guilt, which will lead to more meaningful conversations.
Future Work
This lab plans on developing the VRBG as a tool to drive engagement in conversations about input device design with this population. We plan to have recurring meetings to enable the participatory design of the system. While the end result may not be hugely impactful to this audience, the effect of having a stream of suggestions and iterative design based on the needs of this population will be.
Conclusion
Touchscreens and more intuitive Natural User Interfaces can enable older adults to join the digital world [7]. It is important that inputs are designed so that current older adults, and the generations set to enter that space, can remain in the virtual world.

Table 3: Quotes from participants
"you don't want me to participate, I'm computer illiterate"
"simple is safe"
"technology moves too fast"
Gesture- and speech-based inputs are the best future direction for interaction design with older adults. Touch-based inputs were favored by most participants. We believe that gestures will begin to replace touchscreen interactions and can utilize many of the same features as touch. While there was some hesitation around speech inputs, we believe that as technology improves, speech will become more accepted. When considering the next generation of older adults, the pervasive use of speech-based home assistants might shift this preference away from touch. Multimodal systems that can utilize multiple input streams, in particular gestures and speech, can provide a robust interaction space, one capable of overcoming the individual-level difficulties incurred while aging by providing alternative input options. With the wide range of abilities found in older generations, this flexibility is critical to widespread accessibility.
REFERENCES

[1] Barry Brumitt and Jonathan J. Cadiz. 2001. "Let There Be Light": Examining Interfaces for Homes of the Future. In INTERACT, Vol. 1. 375–382.

[2] Young J. Chun and Patrick E. Patterson. 2012. A usability gap between older adults and younger adults on interface design of an Internet-based telemedicine system. Work 41, Supplement 1 (2012), 349–352.

[3] Alessia D'Andrea, Arianna D'Ulizia, Fernando Ferri, and Patrizia Grifoni. 2009. A multimodal pervasive framework for ambient assisted living. In Proceedings of the 2nd International Conference on PErvasive Technologies Related to Assistive Environments. 1–8.

[4] Cui Jian, Hui Shi, Nadine Sasse, Carsten Rachuy, Frank Schafmeister, Holger Schmidt, and Nicole von Steinbüchel. 2013. Modality preference in multimodal interaction for elderly persons. In International Joint Conference on Biomedical Engineering Systems and Technologies. Springer, 378–393.

[5] Zhao Xia Jin, Tom Plocher, and Liana Kiff. 2007. Touch screen user interfaces for older adults: button size and spacing. In International Conference on Universal Access in Human-Computer Interaction. Springer, 933–941.

[6] D. B. Koons, C. J. Sparrell, and others. 1993. Integrating simultaneous input from speech, gaze, and hand gestures. MIT Press: Menlo Park, CA (1993).

[7] Fernando Miguel Pinto, João Freitas, and Miguel Sales Dias. 2012. Living Home Center-a personal assistant with multimodal interaction for elderly and mobility impaired e-inclusion. (2012).

[8] Judith Rodin. 1986. Aging and health: Effects of the sense of control. Science