
Publication


Featured research published by M.D. Rutherford.


Cognitive Neuropsychology | 2007

Evidence of a divided-attention advantage in autism

M.D. Rutherford; Eric D. Richards; Vanessa Moldes; Allison B. Sekuler

People with autism spectrum disorders appear to have some specific advantages in visual processing, including an advantage in visual search tasks. However, executive function theory predicts deficits in tasks that require divided attention, and there is evidence that people with autism have difficulty broadening their attention (Mann & Walker, 2003). We wanted to know how robust the known attentional advantage is. Would people with autism have difficulty dividing attention between central and peripheral tasks, as is required in the Useful Field of View task, or would they show an advantage due to strengths in visual search? Observers identified central letters and localized peripheral targets under both focused- and divided-attention conditions. Participants were 20 adults with high-functioning autism and Asperger's syndrome and 20 adults matched to the experimental group on education, age, and IQ. Contrary to some predictions, individuals with autism tended to show relatively smaller divided-attention costs than did matched adults. These results stand in stark contrast to the predictions of some prevalent theories of visual and cognitive processing in autism.


Vision Research | 2013

Comparing face processing strategies between typically-developed observers and observers with autism using sub-sampled-pixels presentation in response classification technique.

Masayoshi Nagai; Patrick J. Bennett; M.D. Rutherford; Carl M. Gaspar; Takatsune Kumada; Allison B. Sekuler

In the present study we modified the standard classification image method by subsampling visual stimuli to provide us with a technique capable of examining an individual's face-processing strategy in detail with fewer trials. Experiment 1 confirmed that one testing session (1450 trials) was sufficient to produce classification images that were qualitatively similar to those obtained previously with 10,000 trials (Sekuler et al., 2004). Experiment 2 used this method to compare classification images obtained from observers with autism spectrum disorders (ASD) and typically-developing (TD) observers. As was found in Experiment 1, classification images obtained from TD observers suggested that they all discriminated faces based on information conveyed by pixels in the eyes/brow region. In contrast, classification images obtained from ASD observers suggested that they used different perceptual strategies: three out of five ASD observers used a typical strategy of making use of information in the eye/brow region, but two used an atypical strategy that relied on information in the forehead region. The advantage of using the response classification technique is that there is no restriction to specific theoretical perspectives or a priori hypotheses, which enabled us to see unexpected strategies, like the ASD observers' forehead strategy, and thus showed that this technique is particularly useful in the examination of special populations.
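The core logic of the response classification (reverse correlation) technique can be illustrated with a minimal sketch: a simulated observer whose responses happen to be driven by an eye-region template, and a classification image recovered by averaging the noise fields conditioned on each response. The grid size, the template, and the decision rule below are illustrative assumptions, not the stimuli or observers from the study.

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials, h, w = 1450, 16, 16  # small pixel grid stands in for sub-sampled face pixels

# Hypothetical observer: responds "face A" whenever noise brightens the eye region.
eye_region = np.zeros((h, w))
eye_region[4:6, 3:13] = 1.0

noise = rng.normal(size=(n_trials, h, w))
responses = (noise * eye_region).sum(axis=(1, 2)) > 0  # True = "face A"

# Classification image: mean noise on "A" trials minus mean noise on "not A" trials.
ci = noise[responses].mean(axis=0) - noise[~responses].mean(axis=0)

# Pixels that drive the decision should carry the largest classification-image values.
drive = np.abs(ci[eye_region == 1]).mean()
rest = np.abs(ci[eye_region == 0]).mean()
print(drive > rest)  # the simulated observer's eye-region strategy is recovered
```

Because the method makes no assumption about which pixels matter, the same averaging would just as readily expose a forehead-based strategy, which is the property the abstract highlights.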


Vision Research | 2015

Adults with autism spectrum disorder show evidence of figural aftereffects with male and female faces

Jennifer A. Walsh; Mark D. Vida; Marcus Neil Morrisey; M.D. Rutherford

The norm-based coding model of face perception posits that face perception involves an implicit comparison of observed faces to a representation of an average face (prototype) that is shaped by experience. Using some methods, observers with autism spectrum disorder (ASD) have shown atypical face perception, but other methods suggest preserved face perception. Here, we used a figural aftereffects paradigm to test whether adults with ASD showed evidence of norm-based coding of faces, and whether they encode separate prototypes for male and female faces, as typical observers do. Following prolonged exposure to distorted faces that differ from their stored prototype, neurotypical adults show aftereffects: their prototype shifts in the direction of the adapting face. We measured aftereffects following adaptation to one distorted gender. There were no significant group differences in the size or direction of the aftereffects; both groups showed sex-selective aftereffects after adapting to expanded female faces but showed aftereffects for both sexes after adapting to contracted faces of either sex, demonstrating that adults with and without ASD show evidence of partially dissociable male and female face prototypes. This is the first study to examine sex-selective prototypes using figural aftereffects in adults with ASD and replicates the findings of previous studies examining aftereffects in adults with ASD. The results contrast with studies reporting diminished adaptation in children with ASD.
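The norm-based coding account described above can be sketched in one dimension, with faces as points on an expansion/contraction axis and the stored prototype drifting toward the adaptor during adaptation. The face values and adaptation rate are hypothetical numbers chosen only to show the direction of the aftereffect.

```python
# Minimal sketch of a figural aftereffect under norm-based coding, assuming a
# 1-D expansion/contraction axis where 0.0 is the (pre-adaptation) average face.
prototype = 0.0   # stored average-face prototype
adaptor = 1.0     # an expanded adapting face
alpha = 0.3       # hypothetical adaptation rate

prototype += alpha * (adaptor - prototype)  # prototype shifts toward the adaptor

test_face = 0.0                    # a physically average test face
perceived = test_face - prototype  # faces are coded relative to the shifted norm
print(perceived < 0)               # the average face now appears contracted
```

Sex-selective prototypes, as tested in the study, would amount to maintaining separate `prototype` values for male and female faces, so that adapting one need not shift the other.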


Perception | 2008

Reading-Related Habitual Eye Movements Produce a Directional Anisotropy in the Perception of Speed and Animacy

Paul A. Szego; M.D. Rutherford

Judgments of speed and animacy from monolingual English readers were compared with those of bilingual readers of both English and a language read from right to left. Participants viewed a pair of dots moving horizontally across a screen at the same speed. Using a two-alternative forced-choice task, participants judged which dot in a pair moved faster (a direct measure of speed perception) or appeared to be alive (an indirect and correlated judgment of speed perception). In two experiments monolingual participants judged dots moving left to right to be faster and alive more often than dots moving right to left. In contrast, bilingual participants exhibited no directional bias for speed or animacy. These results suggest that the highly practiced eye movements involved in reading are associated with the presence or absence of a directional anisotropy for speed and animacy.


Journal of Vision | 2015

Effects of social stimuli on covert attentional orienting and saccadic eye movements during visual search

Marcus Neil Morrisey; M.D. Rutherford

Although visual attention and saccadic eye movements are tightly linked, our attention can move to objects in visual space without a saccade to the object, a phenomenon called covert attentional orienting. Socially significant targets like faces and human bodies attract attention. Using a visual search task, we examined reactions to social targets by comparing performance measures, such as reaction time and error rate, with saccadic eye movement measures. Participants briefly viewed a word representing 1 of 6 categories. One image from each category then appeared in a circular array on the screen. Participants identified the image in a target frame (the green frame) as either matching or not matching the presented word. On half of the trials, a distracter frame (the red frame) was also present. Consistent with previous results, participants responded faster when seeking a social target (face or body) compared to non-social targets, and this effect was not diminished by inversion. They were slower and more error prone on trials containing a distracter frame. Participants saccaded first and more often to social targets than to non-social targets but spent less time focused on social targets. When images were inverted, participants did not saccade more often to social stimuli than non-social distracters. Participants varied widely in the proportion of trials in which they saccaded to any object, between 2% and 97%, suggesting that some participants are capable of performing this task peripherally. Indeed, a lower proportion of trials with saccades to targets was associated with faster RT. The evidence supported an attentional effect of social stimuli that is independent of saccadic eye movements, in addition to modulation of looking behavior. Meeting abstract presented at VSS 2015.


Perception | 2010

Mapping emotion category boundaries using a visual expectation paradigm.

Jenna L. Cheal; M.D. Rutherford

Past research showing categorical perception of emotional facial expressions has relied on identification and discrimination tasks that require an explicit response via keypress. Here we report a new paradigm for investigating the category boundary of emotional facial expressions that, instead, relies on an implicit response—eye direction. Participants were trained to expect a target stimulus on a particular side of the monitor, predicted by an emotional expression on a face image. An eye-tracker then recorded eye movements of participants as they viewed novel intermediate facial-expression stimuli. Anticipatory eye movement was taken as evidence of categorisation. Results from two experiments suggest that this implicit method can be used to determine category boundaries, and that the boundaries found with this method are similar to those found with the keypress response.
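A common way to read a category boundary off data like these, whether gathered by keypress or by anticipatory eye movements, is to fit the log-odds of the categorisation proportions against the morph level and solve for the 50% point. The morph levels and proportions below are made-up illustrative numbers, not data from the study.

```python
import numpy as np

# Hypothetical data: morph level (0 = one expression, 1 = the other) vs. the
# proportion of anticipatory looks toward the side cued by the second category.
morph = np.array([0.0, 0.2, 0.4, 0.6, 0.8, 1.0])
p_second = np.array([0.05, 0.10, 0.30, 0.75, 0.92, 0.97])

# Fit a line to the log-odds and solve for the 50% point (the category boundary).
logit = np.log(p_second / (1 - p_second))
slope, intercept = np.polyfit(morph, logit, 1)
boundary = -intercept / slope
print(round(boundary, 2))  # morph level where either response is equally likely
```

Comparing boundaries estimated this way from the implicit (eye-direction) and explicit (keypress) measures is one way to formalise the similarity the abstract reports.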


Vision Research | 2007

Differences in discrimination of eye and mouth displacement in autism spectrum disorders

M.D. Rutherford; Kathleen A. Clements; Allison B. Sekuler


Journal of Vision | 2007

Actual and illusory differences in constant speed influence the perception of animacy similarly.

Paul A. Szego; M.D. Rutherford


Evolution and Human Behavior | 2006

Looking for loss in all the wrong places: loss avoidance does not explain cheater detection

Laurence Fiddick; M.D. Rutherford


Journal of Autism and Developmental Disorders | 2015

Brief Report: Infants Developing with ASD Show a Unique Developmental Pattern of Facial Feature Scanning

M.D. Rutherford; Jennifer A. Walsh; Vivian Lee

Collaboration


Dive into M.D. Rutherford's collaborations.

Top Co-Authors

Masayoshi Nagai

National Institute of Advanced Industrial Science and Technology
