
Publications


Featured research published by Shirin E. Hassan.


Optometry and Vision Science | 2002

Vision and mobility performance of subjects with age-related macular degeneration.

Shirin E. Hassan; Jan E. Lovie-Kitchin; Russell L. Woods

Purpose. To investigate the effects of age-related macular degeneration (ARMD) on mobility performance and to identify the vision determinants of mobility in subjects with ARMD. Methods. Walking speed and the number of obstacle contacts made on a 79-m indoor mobility course were measured in 21 subjects with ARMD and 11 age-matched subjects with normal vision. The mobility measures were transformed to percentage preferred walking speed and contacts score. The vision functions assessed included binocular visual acuity, contrast sensitivity, and visual field. Results. In this study, subjects with ARMD did not walk significantly slower or make significantly more obstacle contacts on the mobility course than the normally sighted subjects of similar age. Between 29% and 35% of the variance in the ARMD mobility performance was accounted for by visual field and contrast sensitivity measures. The most significant predictor of mobility performance scored as percentage preferred walking speed was the size of a binocular central scotoma. Conclusion. As the size of a binocular central scotoma increases, mobility performance decreases.
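The percentage preferred walking speed (PPWS) transformation mentioned above normalizes each subject's speed on the mobility course by that subject's own preferred walking speed. A minimal sketch of the idea, assuming speeds in meters per second (the function name and example values are illustrative, not taken from the paper):

```python
def percent_preferred_walking_speed(course_speed, preferred_speed):
    """Walking speed on the mobility course expressed as a percentage
    of the subject's preferred (unobstructed) walking speed."""
    return 100.0 * course_speed / preferred_speed

# A subject who prefers 1.2 m/s but walks 0.9 m/s on the course
# achieves 75% of their preferred walking speed.
ppws = percent_preferred_walking_speed(0.9, 1.2)  # -> 75.0
```

Normalizing this way lets mobility performance be compared across subjects whose baseline walking speeds differ.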


Investigative Ophthalmology & Visual Science | 2009

Visual and cognitive deficits predict stopping or restricting driving: the Salisbury Eye Evaluation Driving Study (SEEDS)

Lisa Keay; Beatriz Munoz; Kathleen A. Turano; Shirin E. Hassan; Cynthia A. Munro; Donald D. Duncan; Kevin C. Baldwin; Srichand Jasti; Emily W. Gower; Sheila K. West

PURPOSE To determine the visual and other factors that predict stopping or restricting driving in older drivers. METHODS A group of 1425 licensed drivers aged 67 to 87 years, who were residents of greater Salisbury, participated. At 1 year after enrollment, this group was categorized into those who had stopped driving, drove only within their neighborhood, or continued to drive beyond their neighborhood. At baseline, a battery of structured questionnaires, vision, and cognitive tests were administered. Multivariate analysis determined the factors predictive of stopping or restricting driving 12 months later. RESULTS Of the 1425 enrolled, 1237 (87%) were followed up at 1 year. Excluding those who were already limiting their driving at baseline (n = 35), 1.5% (18/1202) had stopped and 3.4% (41/1202) had restricted their driving. Women (odds ratio [OR], 4.01; 95% confidence interval [CI], 2.05-8.20) and those who preferred to be driven (OR, 3.91; 95% CI, 1.91-8.00) were more likely to stop or restrict driving. Depressive symptoms increased the likelihood of restricting or stopping driving (OR, 1.08; 95% CI, 1.009-1.16 per point on the Geriatric Depression Scale). Slow visual scanning and psychomotor speed (Trail Making Test, Part A: OR, 1.02; 95% CI, 1.01-1.03), poor visuoconstructional skills (Beery-Buktenica Test of Visual Motor Integration: OR, 1.14; 95% CI, 1.05-1.25), and reduced contrast sensitivity (OR, 1.15; 95% CI, 1.03-1.28) predicted stopping or reducing driving. Visual field loss and visual attention were not associated. The effect of vision on changing driving behavior was partially mediated by cognition, depression, and baseline driving preferences. CONCLUSIONS In this cohort, contrast sensitivity and cognitive function were independently associated with incident cessation or restriction of driving space. These data suggest that drivers with functional deficits make difficult decisions to restrict or stop driving.


Optometry and Vision Science | 2003

Gaze Behavior while Crossing Complex Intersections

Duane R. Geruschat; Shirin E. Hassan; Kathleen A. Turano

Background. Crossing the street is a complex task that involves gathering, processing, and acting on information that is time dependent. The gaze behavior of subjects has been previously studied on increasingly complex and dynamic tasks such as making tea, walking indoors, and driving. The purpose of this study was to assess how normally sighted people use their vision to cross a street safely. Specifically, we identified the environmental features people look at when crossing two types of intersections. Method. We measured the eye movements and head directions of 12 normally sighted people as they approached, evaluated, and crossed a light-controlled “plus” intersection and a roundabout. The primary measures were percentage of fixations and head direction. Results. Crossing the street can be divided into three phases: walking to the curb, standing at the curb, and crossing the street. We found that while moving, subjects fixated primarily on crossing elements, and when standing at the curb, they fixated primarily on vehicles. At the plus intersection, fixation behavior corresponded with crossing strategy; the subjects who crossed early fixated on cars, and the subjects who waited for the light to change fixated on traffic controls. At the roundabout, all subjects determined an appropriate time to cross from vehicular traffic flow by directing the majority of their fixations on cars. When moving, the head position of subjects was predominantly centered. Subjects also made head turns in both directions before crossing and directed the head toward the danger zone while crossing. Conclusion. Crossing the street is a complex task that can be described in three phases. Common head and eye behaviors were found near the critical moments of crossing the street. Fixation behavior was closely related to street crossing behavior.


Ophthalmic Epidemiology | 2007

Visual and Cognitive Predictors of Performance on Brake Reaction Test: Salisbury Eye Evaluation Driving Study

Lei Zhang; Kevin C. Baldwin; Beatriz Munoz; Cynthia A. Munro; Kathleen A. Turano; Shirin E. Hassan; Constantine G. Lyketsos; Karen Bandeen-Roche; Sheila K. West

Objectives: Concern for driving safety has prompted research into understanding factors related to performance. Brake reaction speed (BRS), the speed with which persons react to a sudden change in driving conditions, is a measure of performance. Our aim is to determine the visual, cognitive, and physical factors predicting BRS in a population sample of 1425 older drivers. Methods: The Maryland Department of Motor Vehicles roster of persons aged 67–87 and residing in Salisbury, MD, was used for recruitment of the study population. Procedures included the following: habitual, binocular visual acuity using ETDRS charts, contrast sensitivity using a Pelli-Robson chart, visual fields assessed with an 81-point screening Humphrey field at a single intensity threshold, and a questionnaire to ascertain medical conditions. Cognitive status was assessed using a standard battery of tests for attention, memory, visuospatial skills, and scanning. BRS was assessed using a computer-driven device that measured separately the initial reaction speed (IRS) (from light change to red until removing foot from accelerator) and physical response speed (PRS) (removing foot from accelerator to full brake depression). Five trial times were averaged, and time was converted to speed. Results: The median brake reaction time varied from 384 to 5688 milliseconds. Age, gender, and cognition predicted total BRS, a non-informative result as there are two distinct parts to the task. Once separated, a decrease in IRS was associated with low scores on cognitive factors and missing points on the visual field. A decrease in PRS was associated with having three or more physical complaints related to legs and feet, and poorer visual search. Vision was not related to PRS. Conclusion: We have demonstrated the importance of segregating the speeds for the two tasks involved in brake reaction. Only the IRS depends on vision. Persons in good physical condition may perform poorly on brake reaction tests if their vision or cognition is compromised.
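The abstract's key methodological point, separating initial reaction speed from physical response speed and converting averaged trial times to speeds, can be sketched as follows. This is an illustration only; the reciprocal time-to-speed conversion and the example values are assumptions, not details from the study protocol:

```python
def mean_speed(trial_times_ms):
    """Average trial times in milliseconds, then convert the mean time
    to a speed (responses per second), so larger values mean faster."""
    mean_ms = sum(trial_times_ms) / len(trial_times_ms)
    return 1000.0 / mean_ms

# Initial reaction: light changes to red -> foot leaves accelerator.
irs = mean_speed([420, 450, 430, 440, 460])
# Physical response: foot leaves accelerator -> full brake depression.
prs = mean_speed([310, 330, 320, 340, 300])
```

Keeping the two speeds separate is what lets vision predict one component (IRS) while physical complaints predict the other (PRS).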


Investigative Ophthalmology & Visual Science | 2008

Cognitive and vision loss affects the topography of the attentional visual field.

Shirin E. Hassan; Kathleen A. Turano; Beatriz Munoz; Cynthia A. Munro; Karen Bandeen-Roche; Sheila K. West

PURPOSE The attentional visual field (AVF), which describes a person's ability to divide attention and extract visual information from the visual field (VF) within a glance, has been shown to be a good predictor of driving performance. Despite this, very little is known about the shape of the AVF and the factors that affect it. The purposes of this study were to describe the AVF in a large sample of older drivers and identify demographic, cognitive, and vision factors associated with AVF performance and shape. METHODS Registered drivers between 67 and 87 years of age, residing in Greater Salisbury, Maryland, were recruited to participate in the study. Participants underwent a battery of visual and cognitive assessments and completed various questionnaires for demographics, medical history, and history of depression. The AVF was assessed using a divided-attention protocol within the central 20-degree radius along the four principal meridians. The shape of the AVF was classified as either symmetric or one of two asymmetric shape profiles. RESULTS Symmetrically shaped AVFs were found in just 34% of participants. AVF performance was significantly better along the horizontal (15.3 degrees) than the vertical (11.3 degrees) meridian (P < 0.05). After adjusting for AVF area, we found that poorer cognitive and vision performance was associated with a symmetric AVF shape. Overall AVF extent was predicted by vision and cognitive measures as well as various demographic factors. CONCLUSIONS Good vision and cognitive ability appear to be associated with having an asymmetric as opposed to a symmetric AVF shape profile.


Optometry and Vision Science | 2006

Gaze behavior of the visually impaired during street crossing.

Duane R. Geruschat; Shirin E. Hassan; Kathleen A. Turano; Harry A. Quigley; Nathan Congdon

Purpose. This study explored the gaze patterns of fully sighted and visually impaired subjects during the high-risk activity of crossing the street. Methods. Gaze behavior of 12 fully sighted subjects, nine with visual impairment resulting from age-related macular degeneration, and 12 with impairment resulting from glaucoma was monitored using a portable eye tracker as they crossed at two unfamiliar intersections. Results. All subject groups fixated primarily on vehicles and crossing elements but changed their fixation behavior as they moved from “walking to the curb” to “standing at the curb” and to “crossing the street.” A comparison of where subjects fixated in the 4-second time period before crossing showed that the fully sighted who waited for the light to change fixated on the light, whereas the fully sighted who crossed early fixated primarily on vehicles. Visually impaired subjects crossing early or waiting for the light fixated primarily on vehicles. Conclusions. Vision status affects fixation allocation while performing the high-risk activity of street crossing. Crossing decision-making strategy corresponds to fixation behavior only for the fully sighted subjects.


Optometry and Vision Science | 1998

Development and validation of a visual acuity chart for Australian Aborigines and Torres Strait Islanders.

Christine F. Wildsoet; Joanne M. Wood; Shirin E. Hassan

Background. A new visual acuity chart was designed for use with Australia's indigenous population to overcome perceived inadequacies of conventional English letter charts for this group. This chart, which incorporates a black and white turtle icon, is described, and validation data are presented. Methods. The chart is based on logarithm of the minimum angle of resolution (logMAR) principles and incorporates a turtle symbol modified from the design of an indigenous artist. The task is one of discrimination, with subjects being required to distinguish the split tail of the turtle from its head, which has the same overall shape and average luminance; the body of the turtle provides no directional cues which might assist in this judgment. The chart was validated in two ways. In Experiment I, performance was compared with the Bailey-Lovie and Konig bar charts in terms of unaided visual acuity data for 90 subjects (mean age: 38.3 ± 20.3 years). In Experiment II, data were obtained for 10 young subjects for these three charts and an Illiterate E chart, with refractive blur imposed with trial lenses over habitual distance corrections (spherical: +0.50, +1.00, +2.00, and +4.00 D; cylindrical: +1.00 and +2.00 D, axes 45°, 90°, and 180°). To avoid cultural and literacy issues as possible sources of differences in performance between the charts in this validation study, subjects were selected from the wider Australian population rather than specifically from its indigenous segment. Results. In Experiment I, the Turtle chart performed most like the Konig bar chart for this component of the validation exercise. Nonetheless, results for the Turtle chart correlated highly with those for the Bailey-Lovie chart as well as the Konig bar chart, although there were subtle differences between charts in the rate of decline of visual acuity as visual performance decreased. In Experiment II, the Turtle chart behaved most like the Illiterate E chart with imposed spherical focusing errors, with the Bailey-Lovie chart showing a faster decline and the Konig bar chart a slower decline in performance with increasing defocus. All four charts showed similar directional biases with astigmatic defocus, being most affected by oblique (45°) astigmatism. Conclusion. The Turtle chart met the criteria set for its validation as a visual acuity chart in that it gave comparable results to the other commonly used visual acuity charts, both in the case of unaided vision and when refractive blur was imposed.


Journal of Vision | 2017

Motion-generated optical information allows event perception despite blurry vision in AMD and amblyopic patients

Jing Samantha Pan; Jingrong Li; Zidong Chen; Emily A. Mangiaracina; Christopher S. Connell; Hongyuan Wu; Xiaoye Michael Wang; Geoffrey P. Bingham; Shirin E. Hassan

Events consist of objects in motion. When objects move, their opaque surfaces reflect light and produce both static image structure and dynamic optic flow. The static and dynamic optical information co-specify events. Patients with age-related macular degeneration (AMD) and amblyopia cannot identify static objects because of weakened image structure. However, optic flow is detectable despite blurry vision because visual motion measurement uses low spatial frequencies. When motion ceases, image structure persists and might preserve properties specified by optic flow. We tested whether optic flow and image structure interact to allow event perception with poor static vision. AMD (Experiment 1), amblyopic (Experiments 2 and 3), and normally sighted observers identified common events from either blurry (Experiments 1 and 2) or clear images (Experiment 3), when either single image frames were presented, a sequence of frames was presented with motion masks, or a sequence of frames was presented with detectable motion. Results showed that with static images, but no motion, events were not perceived well by participants other than controls in Experiment 3. However, with detectable motion, events were perceived. Immediately following this and again after five days, participants were able to identify events from the original static images. So, when image structure information is weak, optic flow compensates for it and enables event perception. Furthermore, weakened static image structure information nevertheless preserves information that was once available in optic flow. The combination is powerful and allows events to be perceived accurately and stably despite blurry vision.


Optometry and Vision Science | 2014

How do vision and hearing impact pedestrian time-to-arrival judgments?

JulieAnne M. Roper; Shirin E. Hassan

Purpose To determine how accurate normally sighted male and female pedestrians were at making time-to-arrival (TTA) judgments of approaching vehicles when using just their hearing or both their hearing and vision. Methods Ten male and 14 female subjects with confirmed normal vision and hearing estimated the TTA of approaching vehicles along an unsignalized street under two sensory conditions: (1) using both habitual vision and hearing and (2) using habitual hearing only. All subjects estimated how long the approaching vehicle would take to reach them (i.e., the TTA). The actual TTA of vehicles was also measured using custom-made sensors. The error in TTA judgments for each subject under each sensory condition was calculated as the difference between the actual and estimated TTA. A secondary timing experiment was also conducted to adjust each subject’s TTA judgments for their “internal metronome.” Results Error in TTA judgments changed significantly as a function of both the actual TTA (p < 0.0001) and sensory condition (p < 0.0001). Although no main effect for gender was found (p = 0.19), the way the TTA judgments varied within each sensory condition for each gender was different (p < 0.0001). Females tended to be as accurate under either condition (p ≥ 0.01), with the exception of TTA judgments made when the actual TTA was 2 seconds or less and 8 seconds or longer, during which the vision-and-hearing condition was more accurate (p ≤ 0.002). Males made more accurate TTA judgments under the hearing only condition for actual TTA values 5 seconds or less (p < 0.0001), after which there were no significant differences between the two conditions (p ≥ 0.01). Conclusions Our data suggest that males and females use visual and auditory information differently when making TTA judgments. Although the sensory condition did not affect the females’ accuracy in judgments, males initially tended to be more accurate when using their hearing only.
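The error measure described in the Methods is a signed difference between the measured and the estimated arrival time. A one-line sketch (the function name and sign convention are illustrative, not from the paper):

```python
def tta_error(actual_tta_s, estimated_tta_s):
    """Signed time-to-arrival error in seconds: positive when the
    subject judged the vehicle would arrive sooner than it actually
    did (a conservative error), negative when the estimate was too long."""
    return actual_tta_s - estimated_tta_s

# Vehicle actually 6 s away, subject estimated 4 s: error of +2 s.
err = tta_error(6.0, 4.0)  # -> 2.0
```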


Journal of Visual Impairment & Blindness | 2005

Driver Behavior in Yielding to Sighted and Blind Pedestrians at Roundabouts.

Duane R. Geruschat; Shirin E. Hassan

Collaboration


Dive into Shirin E. Hassan's collaborations.

Top Co-Authors

Cynthia A. Munro, Johns Hopkins University School of Medicine
Beatriz Munoz, Johns Hopkins University
G. D. Barnett, Johns Hopkins University
Duane R. Geruschat, Johns Hopkins University School of Medicine
Sheila K. West, Johns Hopkins University
Jan E. Lovie-Kitchin, Queensland University of Technology