Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Chieko Asakawa is active.

Publication


Featured research published by Chieko Asakawa.


human computer interaction with mobile devices and services | 2016

NavCog: a navigational cognitive assistant for the blind

Cole Gleason; Chengxiong Ruan; Kris M. Kitani; Hironobu Takagi; Chieko Asakawa

Turn-by-turn navigation is a useful paradigm for assisting people with visual impairments during mobility as it reduces the cognitive load of having to simultaneously sense, localize and plan. To realize such a system, it is necessary to be able to automatically localize the user with sufficient accuracy, provide timely and efficient instructions and have the ability to easily deploy the system to new spaces. We propose a smartphone-based system that provides turn-by-turn navigation assistance based on accurate real-time localization over large spaces. In addition to basic navigation capabilities, our system also informs the user about nearby points-of-interest (POI) and accessibility issues (e.g., stairs ahead). After deploying the system on a university campus across several indoor and outdoor areas, we evaluated it with six blind subjects and showed that our system is capable of guiding visually impaired users in complex and unfamiliar environments.
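
A short sketch can make the turn-by-turn paradigm concrete: given a path that the localization and planning layers have already produced, emit one instruction per turn. This is only an illustration in Python, not the NavCog implementation; the waypoint list, angle threshold, and phrasing are assumptions.

    import math

    def turn_by_turn(waypoints):
        # Convert a planned path (list of (x, y) points in meters) into simple
        # turn-by-turn instructions. Illustrative only; a real assistant also
        # has to handle localization error, timing, and POI announcements.
        if len(waypoints) < 2:
            return []
        instructions = []
        for i in range(1, len(waypoints) - 1):
            # Headings of the incoming and outgoing segments at waypoint i.
            (x0, y0), (x1, y1), (x2, y2) = waypoints[i - 1], waypoints[i], waypoints[i + 1]
            h_in = math.atan2(y1 - y0, x1 - x0)
            h_out = math.atan2(y2 - y1, x2 - x1)
            # Signed turn angle in degrees, normalized to [-180, 180).
            turn = (math.degrees(h_out - h_in) + 180) % 360 - 180
            dist = math.hypot(x1 - x0, y1 - y0)
            if abs(turn) < 20:
                instructions.append(f"Go straight for {dist:.0f} meters")
            elif turn > 0:
                instructions.append(f"In {dist:.0f} meters, turn left")
            else:
                instructions.append(f"In {dist:.0f} meters, turn right")
        (xa, ya), (xb, yb) = waypoints[-2], waypoints[-1]
        instructions.append(f"Your destination is {math.hypot(xb - xa, yb - ya):.0f} meters ahead")
        return instructions

    print(turn_by_turn([(0, 0), (10, 0), (10, 8), (4, 8)]))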


user interface software and technology | 2016

VizLens: A Robust and Interactive Screen Reader for Interfaces in the Real World

Anhong Guo; Xiang 'Anthony' Chen; Haoran Qi; Samuel White; Suman Ghosh; Chieko Asakawa; Jeffrey P. Bigham

The world is full of physical interfaces that are inaccessible to blind people, from microwaves and information kiosks to thermostats and checkout terminals. Blind people cannot independently use such devices without at least first learning their layout, and usually only after labeling them with sighted assistance. We introduce VizLens - an accessible mobile application and supporting backend that can robustly and interactively help blind people use nearly any interface they encounter. VizLens users capture a photo of an inaccessible interface and send it to multiple crowd workers, who work in parallel to quickly label and describe elements of the interface to make subsequent computer vision easier. The VizLens application helps users recapture the interface in the field of the camera, and uses computer vision to interactively describe the part of the interface beneath their finger (updating 8 times per second). We show that VizLens provides accurate and usable real-time feedback in a study with 10 blind participants, and our crowdsourcing labeling workflow was fast (8 minutes), accurate (99.7%), and cheap ($1.15). We then explore extensions of VizLens that allow it to (i) adapt to state changes in dynamic interfaces, (ii) combine crowd labeling with OCR technology to handle dynamic displays, and (iii) benefit from head-mounted cameras. VizLens robustly solves a long-standing challenge in accessibility by deeply integrating crowdsourcing and computer vision, and foreshadows a future of increasingly powerful interactive applications that would be currently impossible with either alone.
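
One way to picture the vision step in this abstract: align each live camera frame to the crowd-labeled reference photo via feature matching and a homography, project the fingertip location into reference coordinates, and read off the crowd label whose region contains it. The sketch below, using OpenCV, is an approximation of that idea rather than the VizLens code; the rectangular label format, the external fingertip detector, and the thresholds are assumptions.

    import cv2
    import numpy as np

    def label_under_finger(reference_img, live_img, labeled_regions, fingertip_xy):
        # labeled_regions: list of ((x, y, w, h), text) rectangles in
        # reference-image coordinates, e.g. produced by crowd workers.
        # fingertip_xy: (x, y) fingertip position in the live frame,
        # assumed to come from a separate detector.
        sift = cv2.SIFT_create()
        k_ref, d_ref = sift.detectAndCompute(reference_img, None)
        k_live, d_live = sift.detectAndCompute(live_img, None)

        # Match live-frame features against the reference and keep good ones.
        matches = cv2.BFMatcher().knnMatch(d_live, d_ref, k=2)
        good = [m[0] for m in matches
                if len(m) == 2 and m[0].distance < 0.75 * m[1].distance]
        if len(good) < 10:
            return "Please re-aim the camera at the interface"

        # Homography mapping live-frame coordinates into reference coordinates.
        src = np.float32([k_live[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
        dst = np.float32([k_ref[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
        H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
        if H is None:
            return "Please re-aim the camera at the interface"
        finger_ref = cv2.perspectiveTransform(np.float32([[fingertip_xy]]), H)[0][0]

        # Report the crowd label for the region under the projected fingertip.
        for (x, y, w, h), text in labeled_regions:
            if x <= finger_ref[0] <= x + w and y <= finger_ref[1] <= y + h:
                return text
        return "No labeled element under your finger"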


human factors in computing systems | 2017

People with Visual Impairment Training Personal Object Recognizers: Feasibility and Challenges

Hernisa Kacorri; Kris M. Kitani; Jeffrey P. Bigham; Chieko Asakawa

Blind people often need to identify objects around them, from packages of food to items of clothing. Automatic object recognition continues to provide limited assistance in such tasks because models tend to be trained on images taken by sighted people with different background clutter, scale, viewpoints, occlusion, and image quality than in photos taken by blind users. We explore personal object recognizers, where visually impaired people train a mobile application with a few snapshots of objects of interest and provide custom labels. We adopt transfer learning with a deep learning system for user-defined multi-label k-instance classification. Experiments with blind participants demonstrate the feasibility of our approach, which reaches accuracies over 90% for some participants. We analyze user data and feedback to explore effects of sample size, photo-quality variance, and object shape; and contrast models trained on photos by blind participants to those by sighted participants and generic recognizers.
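
A common reading of "transfer learning with a deep learning system" for few-shot, user-defined labels is to freeze a pretrained backbone and train only a small classification head on the user's snapshots. The sketch below follows that generic recipe with torchvision; the folder layout, MobileNetV2 backbone, and hyperparameters are assumptions, not the paper's actual setup.

    import torch
    import torch.nn as nn
    from torchvision import datasets, models, transforms

    def train_personal_recognizer(photo_dir, num_labels, epochs=20):
        # photo_dir/<label_name>/<image>.jpg: a few snapshots per object,
        # captured and labeled by the user (assumed layout).
        tf = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])
        data = datasets.ImageFolder(photo_dir, transform=tf)
        loader = torch.utils.data.DataLoader(data, batch_size=8, shuffle=True)

        # Freeze the pretrained backbone; only the new head is trained.
        model = models.mobilenet_v2(weights="IMAGENET1K_V1")
        for p in model.parameters():
            p.requires_grad = False
        model.classifier[1] = nn.Linear(model.last_channel, num_labels)

        opt = torch.optim.Adam(model.classifier[1].parameters(), lr=1e-3)
        loss_fn = nn.CrossEntropyLoss()
        model.train()
        for _ in range(epochs):
            for images, labels in loader:
                opt.zero_grad()
                loss_fn(model(images), labels).backward()
                opt.step()
        return model, data.classes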


Proceedings of the 13th Web for All Conference | 2016

NavCog: turn-by-turn smartphone navigation assistant for people with visual impairments or blindness

Cole Gleason; Kris M. Kitani; Hironobu Takagi; Chieko Asakawa

NavCog is a novel smartphone navigation system for people with visual impairments or blindness, capable of assisting the users during autonomous mobility in complex and unfamiliar indoor/outdoor environments. The accurate localization achieved by NavCog is used for precise turn-by-turn way-finding assistance as the first step, but the ultimate goal is to present a variety of location based information to the user, such as points of interest gathered from social media and online geographic information services.


conference on computers and accessibility | 2016

Supporting Orientation of People with Visual Impairment: Analysis of Large Scale Usage Data

Hernisa Kacorri; Sergio Mascetti; Andrea Gerino; Hironobu Takagi; Chieko Asakawa

In the field of assistive technology, large scale user studies are hindered by the fact that potential participants are geographically sparse and longitudinal studies are often time consuming. In this contribution, we rely on remote usage data to perform large scale and long duration behavior analysis on users of iMove, a mobile app that supports the orientation of people with visual impairments. Exploratory analysis highlights popular functions, common configuration settings, and usage patterns among iMove users. The study shows stark differences between users accessing the app through VoiceOver and other users, who tend to use the app more sparsely and sporadically. Analysis through clustering of VoiceOver iMove user interactions discovers four distinct user groups: 1) users interested in surrounding points of interest, 2) users keeping the app active for long sessions while in movement, 3) users interacting in short bursts to inquire about current location, and 4) users querying in bursts about surrounding points of interest and addresses. Our analysis provides insights into iMove's user base and can inform decisions for tailoring the app to diverse user groups, developing future improvements of the software, or guiding the design process of similar assistive tools.
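
The clustering step can be pictured with a small sketch: summarize each user's interaction log as a feature vector and group the users with k-means. Only the choice of four clusters mirrors the study; the feature names and synthetic data below are invented for illustration.

    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(0)

    # One row per user: [avg session length (min), sessions per week,
    #                    POI queries per session, location queries per session]
    usage = rng.gamma(shape=2.0, scale=2.0, size=(200, 4))

    # Standardize so no single feature dominates the distance metric.
    features = StandardScaler().fit_transform(usage)

    # Four groups, matching the four user types reported in the study.
    kmeans = KMeans(n_clusters=4, n_init=10, random_state=0).fit(features)

    for cluster_id in range(4):
        members = usage[kmeans.labels_ == cluster_id]
        print(f"cluster {cluster_id}: {len(members)} users, "
              f"mean features = {members.mean(axis=0).round(2)}")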


international conference on image processing | 2015

Recognizing hand-object interactions in wearable camera videos

Tatsuya Ishihara; Kris M. Kitani; Wei-Chiu Ma; Hironobu Takagi; Chieko Asakawa

Wearable computing technologies are advancing rapidly and enabling users to easily record daily activities for applications such as life-logging or health monitoring. Recognizing hand and object interactions in these videos will help broaden application domains, but recognizing such interactions automatically remains a difficult task. Activity recognition from the first-person point-of-view is difficult because the video includes constant motion, cluttered backgrounds, and sudden changes of scenery. Recognizing hand-related activities is particularly challenging due to the many temporal and spatial variations induced by hand interactions. We present a novel approach to recognize hand-object interactions by extracting both local motion features representing the subtle movements of the hands and global hand shape features to capture grasp types. We validate our approach on multiple egocentric action datasets and show that state-of-the-art performance can be achieved by considering both local motion and global appearance information.
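
The fusion idea, local motion cues plus global hand-shape appearance, can be sketched as late fusion: concatenate the two per-clip feature vectors and train a linear classifier on top. The feature extractors themselves are omitted here; the random arrays stand in for their outputs, and the dimensions and class count are arbitrary.

    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import LinearSVC

    rng = np.random.default_rng(0)
    n_clips = 300
    motion_feats = rng.normal(size=(n_clips, 128))  # stand-in for encoded motion descriptors
    shape_feats = rng.normal(size=(n_clips, 64))    # stand-in for hand-shape/grasp descriptors
    labels = rng.integers(0, 5, size=n_clips)       # stand-in interaction classes

    # Late fusion: concatenate the two feature types for each clip.
    fused = np.hstack([motion_feats, shape_feats])

    X_train, X_test, y_train, y_test = train_test_split(
        fused, labels, test_size=0.3, random_state=0)
    clf = make_pipeline(StandardScaler(), LinearSVC())
    clf.fit(X_train, y_train)
    print("held-out accuracy:", clf.score(X_test, y_test))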


conference on computers and accessibility | 2017

NavCog3: An Evaluation of a Smartphone-Based Blind Indoor Navigation Assistant with Semantic Features in a Large-Scale Environment

Daisuke Sato; Uran Oh; Kakuya Naito; Hironobu Takagi; Kris M. Kitani; Chieko Asakawa

Navigating in unfamiliar environments is challenging for most people, especially for individuals with visual impairments. While many personal navigation tools have been proposed to enable independent indoor navigation, they have insufficient accuracy (e.g., 5-10 m), do not provide semantic features about surroundings (e.g., doorways, shops, etc.), and may require specialized devices to function. Moreover, the deployment of many systems is often only evaluated in constrained scenarios, which may not precisely reflect the performance in the real world. Therefore, we have designed and implemented NavCog3, a smartphone-based indoor navigation assistant that has been evaluated in a 21,000 m² shopping mall. In addition to turn-by-turn instructions, it provides information on landmarks (e.g., tactile paving) and points of interest nearby. We first conducted a controlled study with 10 visually impaired users to assess localization accuracy and the perceived usefulness of semantic features. To understand the usability of the app in a real-world setting, we then conducted another study with 43 participants with visual impairments where they could freely navigate in the shopping mall using NavCog3. Our findings suggest that NavCog3 can open a new opportunity for users with visual impairments to independently find and visit large and complex places with confidence.


Proceedings of the 14th Web for All Conference on The Future of Accessible Work | 2017

Achieving Practical and Accurate Indoor Navigation for People with Visual Impairments

Masayuki Murata; Cole Gleason; Erin Brady; Hironobu Takagi; Kris M. Kitani; Chieko Asakawa

Methods that provide accurate navigation assistance to people with visual impairments often rely on instrumenting the environment with specialized hardware infrastructure. In particular, approaches that use sensor networks of Bluetooth Low Energy (BLE) beacons have been shown to achieve precise localization and accurate guidance while the structural modifications to the environment are kept at minimum. To install navigation infrastructure, however, a number of complex and time-critical activities must be performed. The BLE beacons need to be positioned correctly and samples of Bluetooth signal need to be collected across the whole environment. These tasks are performed by trained personnel and entail costs proportional to the size of the environment that needs to be instrumented. To reduce the instrumentation costs while maintaining a high accuracy, we improve over a traditional regression-based localization approach by introducing a novel, graph-based localization method using Pedestrian Dead Reckoning (PDR) and particle filter. We then study how the number and density of beacons and Bluetooth samples impact the balance between localization accuracy and set-up cost of the navigation environment. Studies with users show the impact that the increased accuracy has on the usability of our navigation application for the visually impaired.
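
A bare-bones particle filter in the spirit of the PDR-plus-beacon approach is sketched below: pedestrian dead reckoning steps drive the motion model and BLE RSSI readings drive the weight update. The beacon layout, noise levels, and path-loss parameters are invented for illustration, and the graph-based map constraint described in the paper is not modeled.

    import numpy as np

    rng = np.random.default_rng(0)

    BEACONS = np.array([[0.0, 0.0], [10.0, 0.0], [5.0, 8.0]])  # assumed beacon positions (m)
    TX_POWER, PATH_LOSS_N = -59.0, 2.0                          # assumed RSSI model parameters

    def expected_rssi(positions):
        # Log-distance path-loss model: expected RSSI at each particle from each beacon.
        d = np.linalg.norm(positions[:, None, :] - BEACONS[None, :, :], axis=2)
        return TX_POWER - 10.0 * PATH_LOSS_N * np.log10(np.maximum(d, 0.1))

    def pf_step(particles, weights, step_len, heading, rssi_obs, rssi_sigma=4.0):
        # One predict/update/resample cycle. particles: (N, 2) positions;
        # step_len, heading: the PDR estimate for this step; rssi_obs: the
        # measured RSSI from each beacon.
        n = len(particles)
        # Motion update: move each particle by the PDR step plus noise.
        noisy_len = step_len + rng.normal(0.0, 0.1, n)
        noisy_heading = heading + rng.normal(0.0, 0.05, n)
        particles = particles + np.column_stack(
            [noisy_len * np.cos(noisy_heading), noisy_len * np.sin(noisy_heading)])
        # Observation update: weight by agreement between predicted and measured RSSI.
        err = expected_rssi(particles) - rssi_obs
        weights = weights * np.exp(-0.5 * np.sum((err / rssi_sigma) ** 2, axis=1))
        weights = weights / weights.sum()
        # Multinomial resampling keeps the set focused on likely positions.
        idx = rng.choice(n, size=n, p=weights)
        return particles[idx], np.full(n, 1.0 / n)

    # Example: start uncertain around (1, 1) and take one 0.7 m step heading east.
    particles = rng.normal([1.0, 1.0], 1.0, size=(500, 2))
    weights = np.full(500, 1.0 / 500)
    particles, weights = pf_step(particles, weights, 0.7, 0.0, np.array([-65.0, -78.0, -74.0]))
    print("position estimate:", particles.mean(axis=0).round(2))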


conference on computers and accessibility | 2015

Exploring Interface Design for Independent Navigation by People with Visual Impairments

Erin L. Brady; Daisuke Sato; Chengxiong Ruan; Hironobu Takagi; Chieko Asakawa

Most user studies of navigation applications for people with visual impairments have been limited by existing localization technologies, and appropriate instruction types and information needs have been determined through interviews. Using Wizard-of-Oz navigation interfaces, we explored how people with visual impairments respond to different instruction intervals, precision, output modalities, and landmark use during in situ navigation tasks. We present the results of an experimental study with nine people with visual impairments, and provide direction and open questions for future work on adaptive navigation interfaces.


conference on computers and accessibility | 2017

Virtual Navigation for Blind People: Building Sequential Representations of the Real-World

João Guerreiro; Kris M. Kitani; Chieko Asakawa


Collaboration


Dive into Chieko Asakawa's collaborations.

Top Co-Authors

Kris M. Kitani, Carnegie Mellon University
Cole Gleason, Carnegie Mellon University
Jeffrey P. Bigham, Carnegie Mellon University
João Guerreiro, Instituto Superior Técnico
Eshed Ohn-Bar, Carnegie Mellon University