
Publication


Featured research published by Anna C. Cavender.


Human Factors in Computing Systems | 2009

Evaluating existing audio CAPTCHAs and an interface optimized for non-visual use

Jeffrey P. Bigham; Anna C. Cavender

Audio CAPTCHAs were introduced as an accessible alternative for those unable to use the more common visual CAPTCHAs, but anecdotal accounts have suggested that they may be more difficult to solve. This paper demonstrates in a large study of more than 150 participants that existing audio CAPTCHAs are clearly more difficult and time-consuming to complete as compared to visual CAPTCHAs for both blind and sighted users. In order to address this concern, we developed and evaluated a new interface for solving CAPTCHAs optimized for non-visual use that can be added in-place to existing audio CAPTCHAs. In a subsequent study, the optimized interface increased the success rate of blind participants by 59% on audio CAPTCHAs, illustrating a broadly applicable principle of accessible design: the most usable audio interfaces are often not direct translations of existing visual interfaces.


Conference on Computers and Accessibility | 2007

WebinSitu: a comparative analysis of blind and sighted browsing behavior

Jeffrey P. Bigham; Anna C. Cavender; Jeremy T. Brudvik; Jacob O. Wobbrock; Richard E. Ladner

Web browsing is inefficient for blind web users because of persistent accessibility problems, but the extent of these problems and their practical effects from the perspective of the user has not been sufficiently examined. We conducted a study in situ to investigate the accessibility of the web as experienced by web users. This remote study used an advanced web proxy that leverages AJAX technology to record both the pages viewed and the actions taken by users on the web pages that they visited. Our study was conducted remotely over the period of one week, and our participants used the assistive technology and software to which they were already accustomed and had already configured according to preference. These advantages allowed us to aggregate observations of many users and to explore the practical effects on and coping strategies employed by our blind participants. Our study reflects web accessibility from the perspective of web users and describes quantitative differences in the browsing behavior of blind and sighted web users.


Human Factors in Computing Systems | 2005

EyeDraw: enabling children with severe motor impairments to draw with their eyes

Anthony J. Hornof; Anna C. Cavender

EyeDraw is a software program that, when run on a computer with an eye tracking device, enables children with severe motor disabilities to draw pictures by just moving their eyes. This paper discusses the motivation for building the software, how the program works, the iterative development of two versions of the software, user testing of the two versions by people with and without disabilities, and modifications to the software based on user testing. Feedback from both children and adults with disabilities, and from their caregivers, was especially helpful in the design process. The project identifies challenges that are unique to controlling a computer with the eyes, and unique to writing software for children with severe motor impairments.


Conference on Computers and Accessibility | 2006

MobileASL: intelligibility of sign language video as constrained by mobile phone technology

Anna C. Cavender; Richard E. Ladner; Eve A. Riskin

For Deaf people, access to the mobile telephone network in the United States is currently limited to text messaging, forcing communication in English as opposed to American Sign Language (ASL), the preferred language. Because ASL is a visual language, mobile video phones have the potential to give Deaf people access to real-time mobile communication in their preferred language. However, even today's best video compression techniques cannot yield intelligible ASL at limited cell phone network bandwidths. Motivated by this constraint, we conducted one focus group and one user study with members of the Deaf Community to determine the intelligibility effects of video compression techniques that exploit the visual nature of sign language. Inspired by eye-tracking results that show high-resolution foveal vision is maintained around the face, we studied region-of-interest encodings (where the face is encoded at higher quality) as well as reduced frame rates (where fewer, better-quality frames are displayed every second). At all bit rates studied here, participants preferred moderate quality increases in the face region, sacrificing quality in other regions. They also preferred slightly lower frame rates because they yield better-quality frames for a fixed bit rate. These results show promise for real-time access to the current cell phone network through sign-language-specific encoding techniques.
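The frame-rate preference reported above follows from a simple budget argument: at a fixed network bit rate, every frame removed from the stream frees bits that can be spent on the frames that remain (for example, on a higher-quality face region). A minimal sketch of that arithmetic, with an illustrative 30 kbps bandwidth and frame rates chosen only for the example:

```python
# Sketch of the fixed-bit-rate tradeoff: fewer frames per second means
# more bits available for each individual frame.

def bits_per_frame(bitrate_kbps: float, frame_rate_fps: float) -> float:
    """Average bit budget per encoded frame at a fixed bit rate."""
    return bitrate_kbps * 1000 / frame_rate_fps

# Dropping from 15 fps to 10 fps at 30 kbps raises the per-frame budget
# by 50%, which the encoder can spend on better-quality frames.
budget_15 = bits_per_frame(30, 15)   # 2000.0 bits per frame
budget_10 = bits_per_frame(30, 10)   # 3000.0 bits per frame
print(budget_10 / budget_15)         # 1.5
```

The numbers are hypothetical; the study's finding is the qualitative one, that signers preferred spending this freed budget on per-frame quality rather than on motion smoothness.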


Conference on Computers and Accessibility | 2004

EyeDraw: a system for drawing pictures with eye movements

Anthony J. Hornof; Anna C. Cavender; Rob Hoselton

This paper describes the design and development of EyeDraw, a software program that will enable children with severe mobility impairments to use an eye tracker to draw pictures with their eyes so that they can have the same creative developmental experiences as nondisabled children. EyeDraw incorporates computer-control and software application advances that address the special needs of people with motor impairments, with emphasis on the needs of children. The contributions of the project include (a) a new technique for using the eyes to control the computer when accomplishing a spatial task, (b) the crafting of task-relevant functionality to support this new technique in its application to drawing pictures, and (c) a user-tested implementation of the idea within a working computer program. User testing with nondisabled users suggests that we have designed and built an eye-cursor and eye drawing control system that can be used by almost anyone with normal control of their eyes. The core technique will be generally useful for a range of computer control tasks such as selecting a group of icons on the desktop by drawing a box around them.
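An eye-only control system like the one described above needs a way to distinguish a deliberate "click" from an ordinary glance; a common approach is dwell detection, where the gaze must stay within a small radius for some minimum time. The sketch below is hypothetical (the function name, radius, and sample count are illustrative, not EyeDraw's actual parameters):

```python
# Hypothetical dwell-based gaze selection: treat the gaze as a deliberate
# action only when recent samples cluster tightly around their centroid.

import math

def dwell_detected(gaze_points, radius_px: float = 30.0,
                   min_samples: int = 15) -> bool:
    """True if the last `min_samples` gaze samples all fall within
    `radius_px` of their own centroid (i.e. the eye is dwelling)."""
    if len(gaze_points) < min_samples:
        return False
    recent = gaze_points[-min_samples:]
    cx = sum(x for x, _ in recent) / min_samples
    cy = sum(y for _, y in recent) / min_samples
    return all(math.hypot(x - cx, y - cy) <= radius_px for x, y in recent)

steady = [(100 + i % 3, 200 - i % 2) for i in range(20)]   # fixating
moving = [(100 + 10 * i, 200) for i in range(20)]          # saccading
print(dwell_detected(steady), dwell_detected(moving))      # True False
```

Tuning the dwell threshold is the crux of the design problem the paper identifies: too short and users draw accidentally while looking around; too long and drawing feels sluggish.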


Conference on Computers and Accessibility | 2009

ClassInFocus: enabling improved visual attention strategies for deaf and hard of hearing students

Anna C. Cavender; Jeffrey P. Bigham; Richard E. Ladner

Deaf and hard of hearing students must juggle their visual attention in current classroom settings. Managing many visual sources of information (instructor, interpreter or captions, slides or whiteboard, classmates, and personal notes) can be a challenge. ClassInFocus automatically notifies students of classroom changes, such as slide changes or new speakers, helping them employ more beneficial observing strategies. A user study of notification techniques shows that students who liked the notifications were more likely to visually utilize them to improve performance.


Conference on Computers and Accessibility | 2008

PowerUp: an accessible virtual world

Shari Trewin; Vicki L. Hanson; Mark R. Laff; Anna C. Cavender

PowerUp is a multi-player virtual world educational game with a broad set of accessibility features built in. This paper considers what features are necessary to make virtual worlds usable by individuals with a range of perceptual, physical, and cognitive disabilities. The accessibility features were included in the PowerUp game and validated, to date, with blind and partially sighted users. These features include in-world navigation and orientation tools, font customization, self-voicing text-to-speech output, and keyboard-only and mouse-only navigation. We discuss user requirements gathering, the validation study, and further work needed.


Conference on Computers and Accessibility | 2007

Variable frame rate for low power mobile sign language communication

Neva Cherniavsky; Anna C. Cavender; Richard E. Ladner; Eve A. Riskin

The MobileASL project aims to increase accessibility by enabling Deaf people to communicate over video cell phones in their native language, American Sign Language (ASL). Real-time video over cell phones can be a computationally intensive task that quickly drains the battery, rendering the cell phone useless. Properties of conversational sign language allow us to save power and bits: namely, lower frame rates are possible when one person is not signing due to turn-taking, and signing can potentially employ a lower frame rate than fingerspelling. We conduct a user study with native signers to examine the intelligibility of varying the frame rate based on activity in the video. We then describe several methods for automatically determining the activity of signing or not signing from the video stream in real-time. Our results show that varying the frame rate during turn-taking is a good way to save power without sacrificing intelligibility, and that automatic activity analysis is feasible.
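The turn-taking idea above can be sketched as a simple policy: transmit at a higher frame rate while the user is signing and drop to a much lower rate while they are watching the other party. The detector input, function name, and frame-rate values below are illustrative stand-ins; the paper derives the signing/not-signing decision automatically from the video stream:

```python
# Hypothetical variable-frame-rate policy keyed on signing activity.

def choose_frame_rate(is_signing: bool,
                      signing_fps: int = 10,
                      listening_fps: int = 1) -> int:
    """Return the target frame rate for the current activity."""
    return signing_fps if is_signing else listening_fps

# Over a conversation where the user signs half the time, the average
# transmitted frame rate (and thus encoding work and power) drops sharply.
activity = [True] * 5 + [False] * 5      # signing, then listening
avg_fps = sum(choose_frame_rate(a) for a in activity) / len(activity)
print(avg_fps)  # 5.5
```

The savings come almost for free during turn-taking because intelligibility only depends on the frames sent while the user is actually signing.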


ACM Transactions on Accessible Computing | 2009

Exploring Visual and Motor Accessibility in Navigating a Virtual World

Shari Trewin; Mark R. Laff; Vicki L. Hanson; Anna C. Cavender

For many millions of users, 3D virtual worlds provide an engaging, immersive experience heightened by a synergistic combination of visual realism with dynamic control of the user’s movement within the virtual world. For individuals with visual or dexterity impairments, however, one or both of those synergistic elements are impacted, reducing the usability and therefore the utility of the 3D virtual world. This article considers what features are necessary to make virtual worlds usable by such individuals. Empirical work has been based on a multiplayer 3D virtual world game called PowerUp, to which we have built in an extensive set of accessibility features. These features include in-world navigation and orientation tools, font customization, self-voicing text-to-speech output, key remapping options, and keyboard-only and mouse-only navigation. Through empirical work with legally blind teenagers and adults with cerebral palsy, these features have been refined and validated. Whereas accessibility support for users with visual impairment often revolves around keyboard navigation, these studies emphasized the need to support visual aspects of pointing device actions too. Other notable findings include use of speech to supplement sound effects for novice users, and, for those with cerebral palsy, a general preference to use a pointing device to look around the world, rather than keys or on-screen buttons. The PowerUp accessibility features provide a core level of accessibility for the user groups studied.


Disability and Rehabilitation: Assistive Technology | 2008

MobileASL: Intelligibility of sign language video over mobile phones

Anna C. Cavender; Rahul Vanam; Dane K. Barney; Richard E. Ladner; Eve A. Riskin

For Deaf people, access to the mobile telephone network in the United States is currently limited to text messaging, forcing communication in English as opposed to American Sign Language (ASL), the preferred language. Because ASL is a visual language, mobile video phones have the potential to give Deaf people access to real-time mobile communication in their preferred language. However, even today's best video compression techniques cannot yield intelligible ASL at limited cell phone network bandwidths. Motivated by this constraint, we conducted one focus group and two user studies with members of the Deaf Community to determine the intelligibility effects of video compression techniques that exploit the visual nature of sign language. Inspired by eye-tracking results that show high-resolution foveal vision is maintained around the face, we studied region-of-interest encodings (where the face is encoded at higher quality) as well as reduced frame rates (where fewer, better-quality frames are displayed every second). At all bit rates studied here, participants preferred moderate quality increases in the face region, sacrificing quality in other regions. They also preferred slightly lower frame rates because they yield better-quality frames for a fixed bit rate. The limited processing power of cell phones is a serious concern because a real-time video encoder and decoder will be needed. Choosing less complex settings for the encoder can reduce encoding time, but will affect video quality. We studied the intelligibility effects of this tradeoff and found that we can significantly speed up encoding time without severely affecting intelligibility. These results show promise for real-time access to the current low-bandwidth cell phone network through sign-language-specific encoding techniques.

Collaboration


Dive into Anna C. Cavender's collaborations.

Top Co-Authors

Jeffrey P. Bigham (Carnegie Mellon University)

Vicki L. Hanson (Rochester Institute of Technology)

Eve A. Riskin (University of Washington)