Ali Abdolrahmani
University of Maryland, Baltimore County
Publications
Featured research published by Ali Abdolrahmani.
Proceedings of the 13th Web for All Conference | 2016
Ali Abdolrahmani; Ravi Kuber; Amy Hurst
In this paper, we describe a study specifically focusing on the situationally-induced impairments and disabilities (SIIDs) which individuals who are blind encounter when interacting with mobile devices. We conducted semi-structured interviews with eight legally-blind participants, and presented them with three scenarios to inspire discussion relating to SIIDs. Nine main themes emerged from analysis of the participant interviews, including the challenges faced when using a mobile device one-handed while using a cane to detect obstacles along the intended path, the impact of using a mobile device under inhospitable conditions, and concerns associated with using a mobile device in environments where privacy and safety may be compromised (e.g. when using public transport). These factors were found to reduce the quality of the subjective interaction experience and, in some cases, to limit use of mobile technologies in public venues. Insights from our research can be used to guide the design of future mobile interfaces to better meet the needs of users who are often excluded from the design process.
Conference on Computers and Accessibility | 2014
Samantha McDonald; Joshua Dutterer; Ali Abdolrahmani; Shaun K. Kane; Amy Hurst
In this demonstration, we describe our exploration in making graphic design theory accessible to a visually impaired student with the use of rapid prototyping tools. We created over 10 novel aids with the use of a laser cutter and 3D printer to demonstrate tangible examples of color theory, typefaces, web page layouts, and web design. These tactile aids were inexpensive and fabricated in a relatively short amount of time, suggesting the feasibility of our approach. The participant's feedback indicated an increased understanding of the class material and confirmed the potential of tactile aids and rapid prototyping in an educational environment.
Human Factors in Computing Systems | 2017
Ali Abdolrahmani; William Easley; Michele A. Williams; Stacy M. Branham; Amy Hurst
Prevention of errors has been an orienting goal within the field of Human-Computer Interaction since its inception, with particular focus on minimizing human errors through appropriate technology design. However, there has been relatively little exploration into how designers can best support users of technologies that will inevitably make errors. We present a mixed-methods study in the domain of navigation technology for visually impaired individuals. We examined how users respond to device errors made in realistic scenarios of use. Contrary to conventional wisdom that usable systems must be error-free, we found that 42% of errors were acceptable to users. Acceptance of errors depends on error type, building feature, and environmental context. Further, even when a technical error is acceptable to the user, the misguided social responses of others nearby can negatively impact user experience. We conclude with design recommendations that embrace errors while also supporting user management of errors in technical systems.
Human Factors in Computing Systems | 2016
William Easley; Michele A. Williams; Ali Abdolrahmani; Caroline Galbraith; Stacy M. Branham; Amy Hurst; Shaun K. Kane
The ability to navigate independently can be essential to maintaining employment, taking care of oneself, and leading a fulfilling life. However, for people who are blind, navigation-related tasks in public spaces--such as locating an empty seat--can be difficult without appropriate tools, training, or social context. We present a study of social norms in environments with predominantly blind navigators and discuss how these may differ from what sighted people expect. Based on these findings, we advocate for the creation of more pervasive technologies to help bridge the gap between social norms when people with visual impairments are in predominantly sighted environments.
Conference on Computers and Accessibility | 2017
Stacy M. Branham; Ali Abdolrahmani; William Easley; Morgan Klaus Scheuerman; Erick Ronquillo; Amy Hurst
For decades, researchers have investigated and developed technologies that support independent navigation for people who are blind. This has led to systems that primarily aid in detecting routes, landmarks, and building features. However, there has been relatively little inquiry regarding how technologies might support navigation around and in the presence of other people. What visual information, if any, do blind navigators wish they had about people on their path? To address this question, we surveyed 58 blind and low vision individuals and interviewed 10 blind individuals. We discovered our participants were interested in using visual information about others to increase their physical safety. For example, they wanted to know if a passerby was holding a weapon, if a presumed official had a proper uniform or badge, and how to describe visual aspects of a criminal to law enforcement. This paper presents one of the only reports documenting accessibility challenges related to physical safety posed by others, including how future assistive tools can empower individuals with disabilities to more actively increase their sense of safety. We call this emerging area Personal Safety Management and contribute a set of four broad subareas that deserve further exploration by researchers and designers working within the blind and broader disabilities communities.
Human Factors in Computing Systems | 2017
Morgan Klaus Scheuerman; William Easley; Ali Abdolrahmani; Amy Hurst; Stacy M. Branham
Independent navigation is important to individuals who are blind and visually impaired (VI). Researchers have long explored how blind and VI people navigate to inform the design of more useful, accessible wayfinding devices. However, there has been little research on the role language plays in providing effective text-to-speech directions for this population. In this paper, we investigate the language and cues expressed in written navigational directions exchanged between blind and VI members of a Yahoo! Group mailing list. Through qualitative analysis, we unpack the types and frequencies of information exchanged, including how distances are represented, how direction is indicated, and what landmarks are referenced. We notably found that written directions often included warnings about when a navigator may have gone too far, which alternate routes are easier to navigate, and how welcoming and accessible destinations might be for people with disabilities.
Conference on Computers and Accessibility | 2016
Ali Abdolrahmani; William Easley; Michele A. Williams; Erick Ronquillo; Stacy M. Branham; Tiffany Chen; Amy Hurst
Large indoor spaces continue to pose challenges to independent navigation for people who are blind. Unfortunately, assistive technologies designed to support indoor navigation frequently make errors that are technically difficult or impossible to eliminate. We conducted a study to explore whether there are strategic ways designers can minimize the impact of inevitable errors on user experience. This paper summarizes an online survey of 41 blind individuals regarding their projected acceptance of three types of errors these devices are expected to make. We found that some errors were more acceptable than others. Factors that impacted results included the error type and the social/environmental setting.
Conference on Computers and Accessibility | 2016
Ali Abdolrahmani; Ravi Kuber
As users become increasingly reliant on online resources to satisfy their information needs, care is needed to ensure that these resources are credible in nature, especially if a decision is to be taken based upon the information accessed. The credibility of a website is known to be heavily influenced by its visual appearance. However, individuals who are blind often face challenges accessing these visual cues when using assistive technologies. In this paper, we describe an observational study examining the strategies and workarounds developed by individuals who are blind to perform credibility assessments. These are compared with those used by sighted users. Findings from the study highlight the relationship between accessibility and credibility, and identify the features used to form assessments non-visually. Insights from the study can be used to support the design of highly credible interfaces for blind screen reader users.
Conference on Computers and Accessibility | 2018
Ali Abdolrahmani; Ravi Kuber; Stacy M. Branham
Voice-activated personal assistants (VAPAs)--like Amazon Echo or Apple Siri--offer considerable promise to individuals who are blind due to widespread adoption of these non-visual interaction platforms. However, studies have yet to focus on the ways in which these technologies are used by individuals who are blind, along with whether barriers are encountered during the process of interaction. To address this gap, we interviewed fourteen legally-blind adults with experience with home- and/or mobile-based VAPAs. While participants appreciated the access VAPAs provided to inaccessible applications and services, they faced challenges relating to input, responses from VAPAs, and control of the information presented. User behavior varied depending on the situation or context of the interaction. Implications for design are suggested to support inclusivity when interacting with VAPAs. These include accounting for privacy and situational factors in design, examining ways to address concerns over trust, and synchronizing presentation of visual and non-visual cues.
ACM Sigaccess Accessibility and Computing | 2017
Ali Abdolrahmani
Independent navigation is an important aspect of the lives of individuals who are blind. While orientation and mobility training often equip these individuals with skills for independent living, recent advances in navigation technologies could be used to augment the subjective quality of their navigation experience. In addition to outdoor navigation, blind individuals need to effectively navigate indoor spaces in different social contexts and environments. Moreover, they may need to identify the presence of known and unknown individuals in their vicinity in order to support social interactions (e.g. cueing the user to greet known individuals by name). Combining wearable solutions with computer vision and facial recognition (FR) technologies has the potential to help in this regard. However, only limited research has examined these technologies to inform the future design of assistive aids such that they meet the real-world needs of this population. Research in this proposal aims to use a human-centric approach to understand both the technical and social aspects of FR technology and its integration with navigation aids. An optimal design framework will be sought in order to improve computer-vision-based navigation solutions for the blind community.