Publication


Featured research published by Rachel Gage.


PLOS ONE | 2013

Indoor Navigation by People with Visual Impairment Using a Digital Sign System

Gordon E. Legge; Paul J. Beckmann; Bosco S. Tjan; Gary Havey; Kevin Kramer; David Rolkosky; Rachel Gage; Muzi Chen; Sravan Puchakayala; Aravindhan Rangarajan

There is a need for adaptive technology to enhance indoor wayfinding by visually impaired people. To address this need, we have developed and tested a Digital Sign System. The hardware and software consist of digitally encoded signs widely distributed throughout a building, a handheld sign-reader based on an infrared camera, image-processing software, and a talking digital map running on a mobile device. Four groups of subjects (blind, low vision, blindfolded sighted, and normally sighted controls) were evaluated on three navigation tasks. The results demonstrate that the technology can be used reliably for retrieving information from the signs during active mobility, for finding nearby points of interest, and for following routes in a building from a starting location to a destination. The visually impaired subjects accurately and independently completed the navigation tasks, but took substantially longer than normally sighted controls. This fully functional prototype system demonstrates the feasibility of technology enabling independent indoor navigation by people with visual impairment.


Investigative Ophthalmology & Visual Science | 2013

Recognition of Ramps and Steps by People with Low Vision

Tiana M. Bochsler; Gordon E. Legge; Rachel Gage; Christopher S. Kallie

PURPOSE Detection and recognition of ramps and steps are important for the safe mobility of people with low vision. Our primary goal was to assess the impact of viewing conditions and environmental factors on the recognition of these targets by people with low vision. A secondary goal was to determine if results from our previous studies of normally sighted subjects, wearing acuity-reducing goggles, would generalize to low vision. METHODS Sixteen subjects with heterogeneous forms of low vision participated, with acuities ranging from approximately 20/200 to 20/2000. They viewed a sidewalk interrupted by one of five targets: a single step up or down, a ramp up or down, or a flat continuation of the sidewalk. Subjects reported which of the five targets was shown, and percent correct was computed. The effects of viewing distance, target-background contrast, lighting arrangement, and subject locomotion were investigated. Performance was compared with a group of normally sighted subjects who viewed the targets through acuity-reducing goggles. RESULTS Recognition performance was significantly better at shorter distances and after locomotion (compared with purely stationary viewing). The effects of lighting arrangement and target-background contrast were weaker than hypothesized. Visibility of the targets varied, with the step up being more visible than the step down. CONCLUSIONS The empirical results provide insight into factors affecting the visibility of ramps and steps for people with low vision. The effects of distance, target type, and locomotion were qualitatively similar for low vision and normal vision with artificial acuity reduction. However, the effects of lighting arrangement and background contrast were only significant for subjects with normal vision.


Journal of Vision | 2010

Visual accessibility of ramps and steps.

Gordon E. Legge; Deyue Yu; Christopher S. Kallie; Tiana M. Bochsler; Rachel Gage

The visual accessibility of a space refers to the effectiveness with which vision can be used to travel safely through the space. For people with low vision, the detection of steps and ramps is an important component of visual accessibility. We used ramps and steps as visual targets to examine the interacting effects of lighting, object geometry, contrast, viewing distance, and spatial resolution. Wooden staging was used to construct a sidewalk with transitions to ramps or steps. Forty-eight normally sighted subjects viewed the sidewalk monocularly through acuity-reducing goggles and made recognition judgments about the presence of the ramps or steps. The effects of variation in lighting were milder than expected. Performance declined for the largest viewing distance but exhibited a surprising reversal for nearer viewing. Of relevance to pedestrian safety, the step up was more visible than the step down. We developed a probabilistic cue model to explain the pattern of target confusions. Cues determined by discontinuities in the edge contours of the sidewalk at the transition to the targets were vulnerable to changes in viewing conditions. Cues associated with the height in the picture plane of the targets were more robust.


Optometry and Vision Science | 2012

Seeing steps and ramps with simulated low acuity: Impact of texture and locomotion

Tiana M. Bochsler; Gordon E. Legge; Christopher S. Kallie; Rachel Gage

Purpose. Detecting and recognizing steps and ramps is an important component of the visual accessibility of public spaces for people with impaired vision. The present study, which is part of a larger program of research on visual accessibility, investigated the impact of two factors that may facilitate the recognition of steps and ramps during low-acuity viewing. Visual texture on the ground plane is an environmental factor that improves judgments of surface distance and slant. Locomotion (walking) is common during observations of a layout, and may generate visual motion cues that enhance the recognition of steps and ramps. Methods. In two experiments, normally sighted subjects viewed the targets monocularly through blur goggles that reduced acuity to either approximately 20/150 (mild blur) or 20/880 Snellen (severe blur). The subjects judged whether a step, ramp, or neither was present ahead on a sidewalk. In the texture experiment, subjects viewed steps and ramps on a surface with a coarse black-and-white checkerboard pattern. In the locomotion experiment, subjects walked along the sidewalk toward the target before making judgments. Results. Surprisingly, performance was lower with the textured surface than with a uniform surface, perhaps because the texture masked visual cues necessary for target recognition. Subjects performed better in walking trials than in stationary trials, possibly because they were able to take advantage of visual cues that were only present during motion. Conclusions. We conclude that under conditions of simulated low acuity, large high-contrast texture elements can hinder the recognition of steps and ramps, whereas locomotion enhances recognition.


PLOS ONE | 2016

Indoor Spatial Updating with Reduced Visual Information

Gordon E. Legge; Rachel Gage; Yihwa Baek; Tiana M. Bochsler

Purpose. Spatial updating refers to the ability to keep track of position and orientation while moving through an environment. People with impaired vision may be less accurate in spatial updating, with adverse consequences for indoor navigation. In this study, we asked how artificial restrictions on visual acuity and field size affect spatial updating, and also judgments of the size of rooms. Methods. Normally sighted young adults were tested with artificial restriction of acuity in Mild Blur (Snellen 20/135) and Severe Blur (Snellen 20/900) conditions, and a Narrow Field (8°) condition. The subjects estimated the dimensions of seven rectangular rooms with and without these visual restrictions. They were also guided along three-segment paths in the rooms. At the end of each path, they were asked to estimate the distance and direction to the starting location. In Experiment 1, the subjects walked along the path. In Experiment 2, they were pushed in a wheelchair to determine if reduced proprioceptive input would result in poorer spatial updating. Results. With unrestricted vision, mean Weber fractions for room-size estimates were near 20%. Severe Blur but not Mild Blur yielded larger errors in room-size judgments. The Narrow Field was associated with increased error, but less than with Severe Blur. There was no effect of visual restriction on estimates of distance back to the starting location, and only Severe Blur yielded larger errors in the direction estimates. Contrary to expectation, the wheelchair subjects did not exhibit poorer updating performance than the walking subjects, nor did they show greater dependence on visual condition. Discussion. If our results generalize to people with low vision, severe deficits in acuity or field will adversely affect the ability to judge the size of indoor spaces, but updating of position and orientation may be less affected by visual impairment.
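A note on the metric for readers unfamiliar with it: the abstract does not spell out how the Weber fractions were computed, but in the usual psychophysical sense a Weber fraction expresses an error relative to the magnitude being judged. For a room-size estimate this would plausibly be (an assumed reading, not the paper's stated formula):

W = \frac{\lvert d_{\mathrm{estimated}} - d_{\mathrm{true}} \rvert}{d_{\mathrm{true}}}

so a mean value near 20% corresponds, for example, to a 5 m wall being judged somewhere around 4 to 6 m on average.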


Journal of Vision | 2015

Comparing the visual spans for faces and letters.

Yingchen He; Jennifer M. Scholz; Rachel Gage; Christopher S. Kallie; Tingting Liu; Gordon E. Legge

The visual span (the number of adjacent text letters that can be reliably recognized on one fixation) has been proposed as a sensory bottleneck that limits reading speed (Legge, Mansfield, & Chung, 2001). Like reading, searching for a face is an important daily task that involves pattern recognition. Is there a similar limitation on the number of faces that can be recognized in a single fixation? Here we report on a study in which we measured and compared the visual-span profiles for letter and face recognition. A serial two-stage model for pattern recognition was developed to interpret the data. The first stage is characterized by factors limiting recognition of isolated letters or faces, and the second stage represents the interfering effect of nearby stimuli on recognition. Our findings show that the visual span for faces is smaller than that for letters. Surprisingly, however, when differences in first-stage processing for letters and faces are accounted for, the two visual spans become nearly identical. These results suggest that the concept of visual span may describe a common sensory bottleneck that underlies different types of pattern recognition.


Investigative Ophthalmology & Visual Science | 2016

Indoor Spatial Updating With Impaired Vision

Gordon E. Legge; Christina Granquist; Yihwa Baek; Rachel Gage

Purpose. Spatial updating is the ability to keep track of position and orientation while moving through an environment. We asked how normally sighted and visually impaired subjects compare in spatial updating and in estimating room dimensions. Methods. Groups of 32 normally sighted, 16 low-vision, and 16 blind subjects estimated the dimensions of six rectangular rooms. Updating was assessed by guiding the subjects along three-segment paths in the rooms. At the end of each path, they estimated the distance and direction to the starting location, and to a designated target. Spatial updating was tested in five conditions ranging from free viewing to full auditory and visual deprivation. Results. The normally sighted and low-vision groups did not differ in their accuracy for judging room dimensions. Correlations between estimated size and physical size were high. Accuracy of low-vision performance was not correlated with acuity, contrast sensitivity, or field status. Accuracy was lower for the blind subjects. The three groups were very similar in spatial-updating performance, and exhibited only weak dependence on the nature of the viewing conditions. Conclusions. People with a wide range of low-vision conditions are able to judge room dimensions as accurately as people with normal vision. Blind subjects have difficulty in judging the dimensions of quiet rooms, but some information is available from echolocation. Vision status has little impact on performance in simple spatial updating; proprioceptive and vestibular cues are sufficient.


Archive | 2016

Indoor Spatial Updating with Impaired Vision - Human Performance Data for 32 Normally Sighted Subjects, 16 Low Vision Subjects, and 16 Blind Subjects

Gordon E. Legge; Christina Granquist; Yihwa Baek; Rachel Gage

Data for each participant's room-size estimates are in LeggeEtAl_RoomSizeEstimates.csv, spatial-updating data are in LeggeEtAl_SpatialUpdatingEstimates.csv, and demographic data for each participant are in LeggeEtAl_SubjectDemographics.csv. Detailed information on each data file can be found in LeggeEtAl_Documention.txt. Please see the referenced article for more information about the methods.
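For readers who want to work with these files, here is a minimal loading sketch in Python (assuming pandas is available; the column names are documented only in LeggeEtAl_Documention.txt and are not assumed here):

import pandas as pd  # assumed available; any CSV reader would work

# File names exactly as listed in the dataset description above.
room_sizes = pd.read_csv("LeggeEtAl_RoomSizeEstimates.csv")
updating = pd.read_csv("LeggeEtAl_SpatialUpdatingEstimates.csv")
demographics = pd.read_csv("LeggeEtAl_SubjectDemographics.csv")

# Inspect the structure before analysis; consult LeggeEtAl_Documention.txt
# for the authoritative column descriptions.
print(room_sizes.head())
print(updating.head())
print(demographics.head())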


Investigative Ophthalmology & Visual Science | 2009

Visual Accessibility of Ramps and Steps

Deyue Yu; Rachel Gage; Christopher S. Kallie; Gordon E. Legge


Optometry and Vision Science | 2018

How People with Low Vision Achieve Magnification in Digital Reading

Christina Granquist; Yueh-Hsun Wu; Rachel Gage; Michael D. Crossland; Gordon E. Legge

Collaboration


Dive into Rachel Gage's collaborations.

Top Co-Authors

Deyue Yu, Ohio State University

Yihwa Baek, University of Minnesota

Muzi Chen, University of Minnesota

Yingchen He, University of Minnesota