Cynthia L. Bennett
University of Washington
Publications
Featured research published by Cynthia L. Bennett.
Conference on Computers and Accessibility | 2013
Kyle Rector; Cynthia L. Bennett; Julie A. Kientz
People who are blind or have low vision may have a harder time participating in exercise classes due to inaccessibility, travel difficulties, or lack of experience. Exergames can encourage exercise at home and help lower the barrier to trying new activities, but they often pose accessibility issues since they rely on visual feedback to help align body positions. To address this, we developed Eyes-Free Yoga, an exergame using the Microsoft Kinect that acts as a yoga instructor, teaches six yoga poses, and gives customized auditory-only feedback based on skeletal tracking. We ran a controlled study with 16 people who are blind or have low vision to evaluate the feasibility and feedback of Eyes-Free Yoga. We found that participants enjoyed the game and that the extra auditory feedback helped their understanding of each pose. The findings of this work have implications for improving auditory-only feedback and for the design of exergames using depth cameras.
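The abstract does not spell out the feedback logic, but the mechanism it describes, comparing tracked joint angles against a target pose and speaking a correction, can be sketched roughly as below. This is a minimal illustration: the joint names, target angle, tolerance, and feedback phrasing are our assumptions, not Eyes-Free Yoga's actual rules.

```python
# Minimal sketch of auditory-only pose feedback from skeletal tracking.
# Joint layout, target angles, and phrasing are illustrative assumptions.
import numpy as np

def joint_angle(a, b, c):
    """Angle at joint b (in degrees) between segments b->a and b->c."""
    a, b, c = map(np.asarray, (a, b, c))
    u, v = a - b, c - b
    cos = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

def feedback(skeleton, joint, target_deg, tolerance_deg=10.0):
    """Return a spoken correction if the tracked angle misses the target."""
    a, b, c = skeleton[joint]           # three tracked 3-D joint positions
    error = joint_angle(a, b, c) - target_deg
    if abs(error) <= tolerance_deg:
        return None                     # close enough; say nothing
    verb = "straighten" if error < 0 else "bend"
    return f"{verb.capitalize()} your {joint} a little more."

# Example: a left knee tracked at roughly 150 degrees against a 170-degree target.
skeleton = {"left knee": ([0, 1, 0], [0, 0, 0], [0.5, -0.85, 0])}
print(feedback(skeleton, "left knee", target_deg=170))
```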
Conference on Computers and Accessibility | 2013
Kotaro Hara; Shiri Azenkot; Megan Campbell; Cynthia L. Bennett; Vicki Le; Sean Pannella; Robert Moore; Kelly Minckler; Rochelle H. Ng; Jon E. Froehlich
Low-vision and blind bus riders often rely on known physical landmarks to help locate and verify bus stop locations (e.g., by searching for a shelter, bench, or newspaper bin). However, there are currently few, if any, methods to determine this information a priori via computational tools or services. In this paper, we introduce and evaluate a new scalable method for collecting bus stop location and landmark descriptions by combining online crowdsourcing and Google Street View (GSV). We conduct and report on three studies in particular: (i) a formative interview study of 18 people with visual impairments to inform the design of our crowdsourcing tool; (ii) a comparative study examining differences between physical bus stop audit data and audits conducted virtually with GSV; and (iii) an online study of 153 crowd workers on Amazon Mechanical Turk to examine the feasibility of crowdsourcing bus stop audits using our custom tool with GSV. Our findings reemphasize the importance of landmarks in non-visual navigation, demonstrate that GSV is a viable source of bus stop audit data, and show that minimally trained crowd workers can find and identify bus stop landmarks with 82.5% accuracy across 150 bus stop locations (87.3% with simple quality control).
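The "simple quality control" mentioned above suggests aggregating redundant worker labels, and majority voting is the standard baseline for that. The sketch below is a generic illustration of the idea under our own assumptions, not the paper's actual pipeline.

```python
# Hypothetical majority-vote aggregation of redundant crowd labels per bus stop;
# a generic stand-in for "simple quality control", not the paper's method.
from collections import Counter

def majority_label(labels):
    """Return the most common label among the workers who audited one stop."""
    (label, _count), = Counter(labels).most_common(1)
    return label

# Example: three workers audit the same stop for the presence of a shelter.
worker_labels = {"stop_42": ["shelter", "shelter", "no shelter"]}
print({stop: majority_label(votes) for stop, votes in worker_labels.items()})
```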
Conference on Computers and Accessibility | 2014
Lauren R. Milne; Cynthia L. Bennett; Richard E. Ladner; Shiri Azenkot
There are many educational smartphone games for children, but few are accessible to blind children. We present BraillePlay, a suite of accessible games for smartphones that teach Braille character encodings to promote Braille literacy. The BraillePlay games are based on VBraille, a method for displaying Braille characters on a smartphone. BraillePlay includes four games of varying levels of difficulty: VBReader and VBWriter simulate Braille flashcards, and VBHangman and VBGhost incorporate Braille character identification and recall into word games. We evaluated BraillePlay with a longitudinal study in the wild with eight blind children. Through logged usage data and extensive interviews, we found that all but one participant were able to play the games independently and found them enjoyable. We also found evidence that some children learned Braille concepts. We distill implications for the design of games for blind children and discuss lessons learned.
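The description of VBraille, displaying a Braille character on a flat touchscreen by vibrating when the finger crosses a raised dot, maps naturally onto a small lookup. The dot patterns below follow standard six-dot Braille; the six-region screen layout is our assumption from the description, not VBraille's documented implementation.

```python
# Sketch of the VBraille idea: a character maps to a six-dot Braille cell, and
# touching a region that holds a raised dot triggers vibration. Dot patterns
# are standard Braille; the six-region layout is an assumption.

# Dots are numbered 1-3 down the left column and 4-6 down the right.
BRAILLE = {"a": {1}, "b": {1, 2}, "c": {1, 4}, "d": {1, 4, 5}, "e": {1, 5},
           "f": {1, 2, 4}, "g": {1, 2, 4, 5}, "h": {1, 2, 5}, "i": {2, 4},
           "j": {2, 4, 5}}

def should_vibrate(char, touched_region):
    """True if the touched screen region corresponds to a raised dot."""
    return touched_region in BRAILLE[char]

# Example: exploring the letter "d" (dots 1, 4, and 5).
for region in range(1, 7):
    print(f"region {region}: {'vibrate' if should_vibrate('d', region) else 'silent'}")
```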
Conference on Computers and Accessibility | 2014
Catherine M. Baker; Lauren R. Milne; Jeffrey Scofield; Cynthia L. Bennett; Richard E. Ladner
Textbook figures are often converted into a tactile format for access by blind students. These figures are not truly accessible unless the text within the figures is also made accessible. A common solution for accessing text in a tactile image is embossed Braille. We have developed an alternative to Braille that uses QR codes for students who want tactile graphics but prefer that the text in figures be spoken rather than rendered in Braille. Tactile Graphics with a Voice (TGV) makes text within tactile graphics accessible through a talking QR code reader app on a smartphone. To evaluate TGV, we performed a longitudinal study in which ten blind and low-vision participants were asked to complete tasks using three alternative picture-taking guidance techniques: 1) no guidance, 2) verbal guidance, and 3) finger-pointing guidance. Our results show that TGV is an effective way to access text in tactile graphics, especially for blind users who are not fluent in Braille. In addition, guidance preferences varied, with each of the guidance techniques being preferred by at least one participant.
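Encoding a figure label as a QR code is the simple half of TGV; the sketch below shows it using the third-party Python `qrcode` package, which is our choice for illustration since the abstract does not name the toolchain. The label text and filename are made up for the example.

```python
# Illustrative only: encode a figure label as a QR code image that a talking
# QR reader app can speak aloud. Requires `pip install qrcode[pil]`; the
# label text and filename are hypothetical.
import qrcode

label_text = "Figure 4: force vs. displacement, spring A"
img = qrcode.make(label_text)        # returns an image wrapper around PIL
img.save("figure4_label.png")        # printed and placed beside the graphic
```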
Human Factors in Computing Systems | 2017
Haley MacLeod; Cynthia L. Bennett; Meredith Ringel Morris; Edward Cutrell
Research advancements allow computational systems to automatically caption social media images. Often, these captions are evaluated with sighted humans using the image as a reference. Here, we explore how blind and visually impaired people experience these captions in two studies about social media images. Using a contextual inquiry approach (n=6 blind/visually impaired), we found that blind people place a lot of trust in automatically generated captions, filling in details to resolve differences between an image's context and an incongruent caption. We built on this in-person study with a second, larger online experiment (n=100 blind/visually impaired) to investigate the role of phrasing in encouraging trust or skepticism in captions. We found that captions emphasizing the probability of error, rather than correctness, encouraged people to attribute incongruence to an incorrect caption rather than to missing details. Where existing research has focused on encouraging trust in intelligent systems, we conclude by challenging this assumption and considering the benefits of encouraging appropriate skepticism.
Conference on Computers and Accessibility | 2014
Megan Campbell; Cynthia L. Bennett; Caitlin Bonnar; Alan Borning
Locating bus stops, particularly in unfamiliar areas, can present challenges to people who are blind or have low vision. At the same time, new information technologies such as smartphones and mobile devices have enabled them to undertake a much greater range of activities with increased independence. We focus on the intersection of these issues. We developed and deployed StopInfo, a system for public transit riders that provides very detailed information about bus stops, with the goal of helping riders find and verify bus stop locations. We augmented internal information from a major transit agency in the Seattle area with information entered by the community, primarily as they waited at these stops. Additionally, we conducted a five-week field study with six blind and low-vision participants to gauge usage patterns and determine values related to independent travel. We found that StopInfo was received positively and is generally usable. Furthermore, the system supports tenets of independence; participants took public transit trips that they might not have attempted otherwise. Lastly, an audit of bus stops in three Seattle neighborhoods found that information from both the transit agency and the community was accurate.
ACM Transactions on Accessible Computing | 2016
Catherine M. Baker; Lauren R. Milne; Ryan Drapeau; Jeffrey Scofield; Cynthia L. Bennett; Richard E. Ladner
We discuss the development of Tactile Graphics with a Voice (TGV), a system for accessing label information in tactile graphics using QR codes. Blind students often rely on tactile graphics to access textbook images, and many textbook images carry a large number of text labels that need to be made accessible. To do so, we propose TGV, which replaces the text labels with QR codes as an alternative to Braille; the codes are read aloud by a smartphone application. We evaluated the system with a longitudinal study in which 10 blind and low-vision participants completed tasks using three different modes of the smartphone application: (1) no guidance, (2) verbal guidance, and (3) finger-pointing guidance. Our results show that TGV is an effective way to access text in tactile graphics, especially for blind users who are not fluent in Braille. We also found that preferences varied greatly across the modes, indicating that future work should support multiple modes. We expand upon the algorithms we used to implement the finger-pointing guidance, as well as the algorithms that automatically place QR codes on documents. We also discuss work we have started on a Google Glass version of the application.
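The automatic-placement step can be approximated by generating one code per label and pasting it at the label's coordinates. The sketch below does exactly that with `qrcode` and Pillow; the coordinates, sizing, and lack of overlap avoidance are illustrative simplifications, not the paper's placement algorithm.

```python
# Rough sketch of pasting QR-code labels onto a figure image, in the spirit of
# TGV's automatic placement. Coordinates and sizing are made-up assumptions;
# a real placement algorithm must also avoid covering figure content.
# Requires `pip install qrcode[pil]` and Pillow.
import qrcode
from PIL import Image

def place_qr_labels(figure_path, labels, out_path, box=100):
    """Paste one QR code per (text, x, y) label onto the figure image."""
    figure = Image.open(figure_path).convert("RGB")
    for text, x, y in labels:
        qr = qrcode.make(text).get_image().resize((box, box))
        figure.paste(qr, (x, y))     # naive placement; no overlap handling
    figure.save(out_path)

place_qr_labels("cell_diagram.png",
                [("nucleus", 40, 60), ("ribosome", 220, 180)],
                "cell_diagram_tgv.png")
```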
Conference on Computers and Accessibility | 2016
Cynthia L. Bennett; Kristen Shinohara; Brianna Blaser; Andrew R. Davidson; Kat M. Steele
Although a critical step in the technology design process, ideation is often not accessible for people with disabilities. We present findings from a design workshop facilitated to brainstorm accessible ideation methods. Groups, mostly engineers, ideated on a design challenge and documented access barriers encountered by participants with disabilities. They then ideated and prototyped potential solutions for decreasing access barriers. We offer suggestions for more accessible communication and ideation on a design team and insights from using a workshop as a site for rethinking ideation.
Human Factors in Computing Systems | 2018
Martez E. Mott; Jane E; Cynthia L. Bennett; Edward Cutrell; Meredith Ringel Morris
We present the results of an exploration to understand the accessibility of smartphone photography for people with motor impairments. We surveyed forty-six people and interviewed twelve people about capturing, editing, and sharing photographs on smartphones. We found that people with motor impairments encounter many challenges with smartphone photography, resulting in users capturing fewer photographs than they would like. Participants described various strategies they used to overcome challenges in order to capture a quality photograph. We also found that photograph quality plays a large role in deciding which photographs users share and how often they share, with most participants rating their photographs as average or poor quality compared to photos shared on their social networks. Additionally, we created design probes of two novel photography interfaces and received feedback from our interview participants about their usefulness and functionality. Based on our findings, we propose design recommendations for how to improve the accessibility of mobile photoware for people with motor impairments.
Human Factors in Computing Systems | 2018
Cynthia L. Bennett; Jane E; Martez E. Mott; Edward Cutrell; Meredith Ringel Morris
We contribute a qualitative investigation of how teens with visual impairments (VIPs) access smartphone photography, from the time they take photos through editing and sharing them on social media. We observed that they largely want to engage with photos visually, similarly to their sighted peers, and have developed strategies around photo capture, editing, sharing, and consumption that attempt to mitigate the usability limitations of current photography and social media apps. We demonstrate the need for more work examining how young people with low vision engage with smartphone photography and social media, as they are heavy users of such technologies and face challenges distinct from those of their totally blind counterparts. We conclude with design considerations for alleviating the usability barriers we uncovered and for making smartphone photography and social media more accessible and relevant for VIPs.