
Publication


Featured research published by Christopher S. Kallie.


Investigative Ophthalmology & Visual Science | 2008

Nonlinear mixed-effects modeling of MNREAD data.

Sing-Hang Cheung; Christopher S. Kallie; Gordon E. Legge; Allen M. Y. Cheong

PURPOSE: It is often difficult to estimate parameters from individual clinical data because of noisy or incomplete measurements. Nonlinear mixed-effects (NLME) modeling provides a statistical framework for analyzing population parameters and the associated variations, even when individual data sets are incomplete. The authors demonstrate the application of NLME by analyzing data from the MNREAD, a continuous-text reading-acuity chart.

METHODS: The authors analyzed MNREAD data (measurements of reading speed vs. print size) for two groups: 42 adult observers with normal vision and 14 patients with age-related macular degeneration (AMD). Truncated sets of MNREAD data were generated from the individual observers with normal vision. The MNREAD data were fitted with a two-limb function and an exponential-decay function using an individual curve-fitting approach and an NLME modeling approach.

RESULTS: The exponential-decay function provided slightly better fits than the two-limb function. When the parameter estimates from the truncated data sets were used to predict the missing data, NLME modeling gave better predictions than individual fitting. NLME modeling gave reasonable parameter estimates for AMD patients even when individual fitting returned unrealistic estimates.

CONCLUSIONS: These analyses showed that (1) an exponential-decay function fits MNREAD data very well, (2) NLME modeling provides a statistical framework for analyzing MNREAD data, and (3) NLME analysis provides a way of estimating MNREAD parameters even for incomplete data sets. The present results demonstrate the potential value of NLME modeling for clinical vision data.
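The exponential-decay idea can be illustrated with a small curve-fitting sketch. The parameterization below (maximum reading speed mrs, critical print size cps, decay constant tau) and all numbers are illustrative assumptions rather than the paper's exact formulation, and a single simulated observer is fitted by grid search rather than by a full NLME model:

```python
import numpy as np

def reading_speed(print_size, mrs, cps, tau):
    """Illustrative exponential-decay model: reading speed rises toward a
    maximum (mrs) as print size exceeds a critical print size (cps),
    with decay constant tau. Speed is zero at or below cps."""
    return mrs * (1.0 - np.exp(-(print_size - cps) / tau)) * (print_size > cps)

rng = np.random.default_rng(0)
sizes = np.linspace(0.0, 1.3, 14)                 # print sizes (logMAR-like units)
truth = dict(mrs=200.0, cps=0.1, tau=0.2)
data = reading_speed(sizes, **truth) + rng.normal(0.0, 5.0, sizes.size)

# Coarse grid search for the least-squares fit (a stand-in for the
# iterative optimizers a curve-fitting or NLME package would use).
best, best_sse = None, np.inf
for mrs in np.linspace(150.0, 250.0, 21):
    for cps in np.linspace(0.0, 0.3, 16):
        for tau in np.linspace(0.05, 0.5, 19):
            sse = np.sum((data - reading_speed(sizes, mrs, cps, tau)) ** 2)
            if sse < best_sse:
                best, best_sse = (mrs, cps, tau), sse

print(best)
```

An actual NLME analysis would additionally treat mrs, cps, and tau as varying across observers around population means, which is what lets truncated individual data sets borrow strength from the group.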


Arquivos Brasileiros De Oftalmologia | 2005

Elaboração e validação de tabela MNREAD para o idioma português (Development and validation of an MNREAD chart for the Portuguese language)

Celina Tamaki Monteiro de Castro; Christopher S. Kallie; Solange Rios Salomão

PURPOSE: To create and validate a version of the Minnesota Low Vision Reading Test (MNREAD) acuity chart for the Portuguese language.

METHODS: The Minnesota Low Vision Reading Test acuity chart contains 19 sentences (logMAR 0.5 to 1.3), each with 60 characters printed on three lines. All sentences have the same length and use simple vocabulary. A total of 110 sentences were generated. The sentences were presented to 36 subjects (20 adults and 16 children), and mistakes and reading times were recorded. Thirty-eight sentences were selected for a prototype (MNREAD-P); sentences with extremely high or low mean reading times, large standard deviations, or persistent mistakes by subjects were excluded. Validation: Twenty subjects with normal vision (logMAR 0, i.e., 20/20 or better, with best refractive correction) were tested with the MNREAD-P and read a passage of text representing normal, day-to-day reading. Reading speeds in words per minute were recorded for both the MNREAD-P and the text passage.

RESULTS: Sentences in the MNREAD Portuguese chart are sufficiently consistent to provide reliable measures of reading ability. The reading speed for the passage (logMAR = 0.6) was 197.8 words/minute, and the maximum reading speed calculated with the MNREAD-P was 200.1 words/minute. The correlation between the two measures was r = 0.82.

CONCLUSION: The MNREAD-P was tested on subjects with normal vision, and the results were equivalent to those obtained with the original Minnesota Low Vision Reading Test. The reading speed measured with the MNREAD-P was statistically equivalent to the reading speed for the passage.


Investigative Ophthalmology & Visual Science | 2013

Recognition of Ramps and Steps by People with Low Vision

Tiana M. Bochsler; Gordon E. Legge; Rachel Gage; Christopher S. Kallie

PURPOSE: Detection and recognition of ramps and steps are important for the safe mobility of people with low vision. Our primary goal was to assess the impact of viewing conditions and environmental factors on the recognition of these targets by people with low vision. A secondary goal was to determine if results from our previous studies of normally sighted subjects, wearing acuity-reducing goggles, would generalize to low vision.

METHODS: Sixteen subjects with heterogeneous forms of low vision participated, with acuities ranging from approximately 20/200 to 20/2000. They viewed a sidewalk interrupted by one of five targets: a single step up or down, a ramp up or down, or a flat continuation of the sidewalk. Subjects reported which of the five targets was shown, and percent correct was computed. The effects of viewing distance, target-background contrast, lighting arrangement, and subject locomotion were investigated. Performance was compared with that of a group of normally sighted subjects who viewed the targets through acuity-reducing goggles.

RESULTS: Recognition performance was significantly better at shorter distances and after locomotion (compared with purely stationary viewing). The effects of lighting arrangement and target-background contrast were weaker than hypothesized. Visibility of the targets varied, with the step up being more visible than the step down.

CONCLUSIONS: The empirical results provide insight into factors affecting the visibility of ramps and steps for people with low vision. The effects of distance, target type, and locomotion were qualitatively similar for low vision and normal vision with artificial acuity reduction. However, the effects of lighting arrangement and background contrast were only significant for subjects with normal vision.


Journal of Vision | 2010

Visual accessibility of ramps and steps.

Gordon E. Legge; Deyue Yu; Christopher S. Kallie; Tiana M. Bochsler; Rachel Gage

The visual accessibility of a space refers to the effectiveness with which vision can be used to travel safely through the space. For people with low vision, the detection of steps and ramps is an important component of visual accessibility. We used ramps and steps as visual targets to examine the interacting effects of lighting, object geometry, contrast, viewing distance, and spatial resolution. Wooden staging was used to construct a sidewalk with transitions to ramps or steps. Forty-eight normally sighted subjects viewed the sidewalk monocularly through acuity-reducing goggles and made recognition judgments about the presence of the ramps or steps. The effects of variation in lighting were milder than expected. Performance declined for the largest viewing distance but exhibited a surprising reversal for nearer viewing. Of relevance to pedestrian safety, the step up was more visible than the step down. We developed a probabilistic cue model to explain the pattern of target confusions. Cues determined by discontinuities in the edge contours of the sidewalk at the transition to the targets were vulnerable to changes in viewing conditions. Cues associated with the height in the picture plane of the targets were more robust.


Optometry and Vision Science | 2012

Seeing steps and ramps with simulated low acuity: Impact of texture and locomotion

Tiana M. Bochsler; Gordon E. Legge; Christopher S. Kallie; Rachel Gage

Purpose. Detecting and recognizing steps and ramps is an important component of the visual accessibility of public spaces for people with impaired vision. The present study, which is part of a larger program of research on visual accessibility, investigated the impact of two factors that may facilitate the recognition of steps and ramps during low-acuity viewing. Visual texture on the ground plane is an environmental factor that improves judgments of surface distance and slant. Locomotion (walking) is common during observations of a layout, and may generate visual motion cues that enhance the recognition of steps and ramps.

Methods. In two experiments, normally sighted subjects viewed the targets monocularly through blur goggles that reduced acuity to either approximately 20/150 (mild blur) or 20/880 Snellen (severe blur). The subjects judged whether a step, ramp, or neither was present ahead on a sidewalk. In the texture experiment, subjects viewed steps and ramps on a surface with a coarse black-and-white checkerboard pattern. In the locomotion experiment, subjects walked along the sidewalk toward the target before making judgments.

Results. Surprisingly, performance was lower with the textured surface than with a uniform surface, perhaps because the texture masked visual cues necessary for target recognition. Subjects performed better in walking trials than in stationary trials, possibly because they were able to take advantage of visual cues that were only present during motion.

Conclusions. We conclude that under conditions of simulated low acuity, large high-contrast texture elements can hinder the recognition of steps and ramps, whereas locomotion enhances recognition.


Journal of Vision | 2015

Comparing the visual spans for faces and letters.

Yingchen He; Jennifer M. Scholz; Rachel Gage; Christopher S. Kallie; Tingting Liu; Gordon E. Legge

The visual span, the number of adjacent text letters that can be reliably recognized on one fixation, has been proposed as a sensory bottleneck that limits reading speed (Legge, Mansfield, & Chung, 2001). Like reading, searching for a face is an important daily task that involves pattern recognition. Is there a similar limitation on the number of faces that can be recognized in a single fixation? Here we report on a study in which we measured and compared the visual-span profiles for letter and face recognition. A serial two-stage model for pattern recognition was developed to interpret the data. The first stage is characterized by factors limiting recognition of isolated letters or faces, and the second stage represents the interfering effect of nearby stimuli on recognition. Our findings show that the visual span for faces is smaller than that for letters. Surprisingly, however, when differences in first-stage processing for letters and faces are accounted for, the two visual spans become nearly identical. These results suggest that the concept of visual span may describe a common sensory bottleneck that underlies different types of pattern recognition.


Investigative Ophthalmology & Visual Science | 2012

Identification and Detection of Simple 3D Objects with Severely Blurred Vision

Christopher S. Kallie; Gordon E. Legge; Deyue Yu

PURPOSE: Detecting and recognizing three-dimensional (3D) objects is an important component of the visual accessibility of public spaces for people with impaired vision. The present study investigated the impact of environmental factors and object properties on the recognition of objects by subjects who viewed physical objects with severely reduced acuity.

METHODS: The experiment was conducted in an indoor testing space. We examined detection and identification of simple convex objects by normally sighted subjects wearing diffusing goggles that reduced effective acuity to 20/900. We used psychophysical methods to examine the effect on performance of important environmental variables: viewing distance (from 10-24 feet, or 3.05-7.32 m) and illumination (overhead fluorescent and artificial window), and object variables: shape (boxes and cylinders), size (heights from 2-6 feet, or 0.61-1.83 m), and color (gray and white).

RESULTS: Object identification was significantly affected by distance, color, height, and shape, as well as interactions between illumination, color, and shape. A stepwise regression analysis showed that 64% of the variability in identification could be explained by object contrast values (58%) and object visual angle (6%).

CONCLUSIONS: When acuity is severely limited, illumination, distance, color, height, and shape influence the identification and detection of simple 3D objects. These effects can be explained in large part by the impact of these variables on object contrast and visual angle. Basic design principles for improving object visibility are discussed.
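The stepwise-regression idea, measuring how much additional variance a second predictor explains beyond the first, can be sketched on synthetic data. The variable names contrast and visual_angle and all coefficients below are hypothetical stand-ins, not the study's data:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200

# Hypothetical predictors standing in for the study's variables.
contrast = rng.normal(0.0, 1.0, n)         # object contrast
visual_angle = rng.normal(0.0, 1.0, n)     # object visual angle
identification = 0.8 * contrast + 0.3 * visual_angle + rng.normal(0.0, 0.5, n)

def r_squared(X, y):
    """R^2 of an ordinary least-squares fit with an intercept."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1.0 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))

r2_contrast = r_squared(contrast[:, None], identification)
r2_both = r_squared(np.column_stack([contrast, visual_angle]), identification)
print(round(r2_contrast, 2), round(r2_both - r2_contrast, 2))
```

The second printed number is the increment in explained variance contributed by the second predictor, which is how a stepwise analysis apportions variability between contrast and visual angle.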


Journal of Vision | 2015

Local Surface Patch Classification Using Multilinear PCA+LDA on High-Order Image Structures Compared to Human Observers

Christopher S. Kallie; Eric Egan; James T. Todd

What information do we use to determine the curvatures of local surface patches? In a 5-AFC decision task, observers judged the curvatures of local surface patches viewed through an aperture, including Bells, Dimples, Furrows, Humps, and Saddles that were cylindrically projected onto a sphere. Numerous high-order image structures were computed from stimulus luminance values. Classical and Multilinear PCA were performed on image structures, which were dimensionally degraded until mean model performances (i.e., proportions of correct discriminations) matched the mean performance of observers. The posterior probability distributions of the LDA classifiers were then correlated with human error confusions. Among the image structures that were examined, the strongest predictors of human performance involved 2nd-order derivatives of the luminance patterns. Using more than one image structure at a time did not reliably improve model prediction, leading us to choose Laplacian of Gaussian arrays and Multilinear PCA+LDA for further analysis. The model accounted for approximately 33% of the error confusions that were predicted by independent human observers. In other words, the model was about 1/3 as reliable as the test-retest reliability of independent human observations. It appears that humans may use information analogous to high-order image structures to judge local surface contours; however, the exact information guiding our perceptual judgments remains uncertain. Meeting abstract presented at VSS 2015.
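A minimal sketch of a classical PCA+LDA pipeline of the kind the abstract describes, using numpy only: synthetic feature vectors stand in for the luminance-derived image structures, and the multilinear extension and posterior-correlation analysis are omitted. All sizes and cluster parameters are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
n_classes, n_per, dim, n_comp = 5, 40, 50, 8   # e.g. the 5 surface-patch classes

# Synthetic "image structure" vectors: one Gaussian cluster per class.
means = rng.normal(0.0, 2.0, (n_classes, dim))
X = np.vstack([means[c] + rng.normal(0.0, 1.0, (n_per, dim))
               for c in range(n_classes)])
y = np.repeat(np.arange(n_classes), n_per)

# --- PCA: project onto the top principal components ---
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
Z = Xc @ Vt[:n_comp].T

# --- LDA: linear discriminant scores from class means and pooled covariance ---
class_means = np.stack([Z[y == c].mean(axis=0) for c in range(n_classes)])
pooled = sum(np.cov(Z[y == c].T) for c in range(n_classes)) / n_classes
inv = np.linalg.inv(pooled)

def lda_predict(z):
    # Gaussian discriminant with shared covariance and equal priors.
    scores = [m @ inv @ z - 0.5 * (m @ inv @ m) for m in class_means]
    return int(np.argmax(scores))

accuracy = np.mean([lda_predict(z) == c for z, c in zip(Z, y)])
print(round(accuracy, 3))
```

In the study's analysis it is the classifier's posterior score for each class, not just its top choice, that would be correlated with human confusion patterns.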


Personality and Individual Differences | 2010

The structure of virtue: An empirical investigation of the dimensionality of the Virtues in Action Inventory of Strengths

Jessica Shryack; Michael F. Steger; Robert F. Krueger; Christopher S. Kallie


Journal of Experimental Psychology: Human Perception and Performance | 2007

Variability in Stepping Direction Explains the Veering Behavior of Blind Walkers

Christopher S. Kallie; Paul R. Schrater; Gordon E. Legge

Collaboration


Dive into Christopher S. Kallie's collaborations.

Top Co-Authors

Rachel Gage (University of Minnesota)
C. Tamaki (Federal University of São Paulo)
Solange Rios Salomão (Federal University of São Paulo)
Deyue Yu (Ohio State University)
Eric Egan (Ohio State University)
Adriana Berezovsky (Federal University of São Paulo)