Sharon M. Thomas
University of Nottingham
Publications
Featured research published by Sharon M. Thomas.
Cognitive Neuropsychology | 2003
Timothy R. Jordan; Geoffrey R. Patching; Sharon M. Thomas
The advantage for words in the right visual hemifield (RVF) has been attributed to parallel orthographic processing by the left hemisphere and sequential processing by the right. However, an examination of previous studies of serial position performance suggests that orthographic processing in each hemifield is modulated by retinal eccentricity. To investigate this issue, we presented words at eccentricities of 1, 2, 3, and 4 degrees. Serial position performance was measured using the Reicher-Wheeler task to suppress influences of guesswork, and an eye-tracker controlled fixation location. Greater eccentricities produced lower overall levels of performance in each hemifield, although RVF advantages for words were obtained at each eccentricity (Experiments 1 and 2). However, performance in both hemifields revealed similar U-shaped serial position functions at all eccentricities. Moreover, this performance was not influenced by lexical constraint (high, low; Experiment 2) or lexical status (word, nonword; Experiment 3), although only words (not nonwords) produced an RVF advantage. These findings suggest that although each RVF advantage was produced by left-hemisphere function, the same pattern of orthographic analysis was used by each hemisphere at each eccentricity.
Journal of Experimental Psychology: Learning, Memory and Cognition | 2003
Timothy R. Jordan; Sharon M. Thomas; Geoffrey R. Patching; Kenneth C. Scott-Brown
Exterior letter pairs (e.g., d--k in dark) play a major role in single-word recognition, but other research (D. Briihl & A. W. Inhoff, 1995) indicates no such role in reading text. This issue was examined by visually degrading letter pairs in three positions in words (initial, exterior, and interior) in text. Each degradation slowed reading rate compared with an undegraded control. However, whereas degrading initial and interior pairs slowed reading rate to a similar extent, degrading exterior pairs slowed reading rate most of all. Moreover, these effects were obtained when letter identities across pair positions varied naturally and when they were matched. The findings suggest that exterior letter pairs play a preferential role in reading, and candidates for this role are discussed.
Journal of Experimental Psychology: Human Perception and Performance | 2004
Sharon M. Thomas; Timothy R. Jordan
Seeing a talker's face influences auditory speech recognition, but the visible input essential for this influence has yet to be established. Using a new seamless editing technique, the authors examined effects of restricting visible movement to oral or extraoral areas of a talking face. In Experiment 1, visual speech identification and visual influences on identifying auditory speech were compared across displays in which the whole face moved, only the oral area moved, or only the extraoral area moved. Visual speech influences on auditory speech recognition were substantial and unchanging across whole-face and oral-movement displays. However, extraoral movement also influenced identification of visual and audiovisual speech. Experiments 2 and 3 demonstrated that these results are dependent on intact and upright facial contexts, but only with extraoral movement displays.
Perception | 1999
Timothy R. Jordan; Sharon M. Thomas; Kenneth C. Scott-Brown
We present a demonstration of word perception in which stimuli containing very few letters (just 50% of their original number) are presented for unlimited durations and yet are seen unequivocally as complete words. The phenomenon suggests that recognition of words can be achieved even when perception of their component letters is prevented.
Attention Perception & Psychophysics | 2002
Sharon M. Thomas; Timothy R. Jordan
Perception of visual speech and the influence of visual speech on auditory speech perception is affected by the orientation of a talker’s face, but the nature of the visual information underlying this effect has yet to be established. Here, we examine the contributions of visually coarse (configural) and fine (featural) facial movement information to inversion effects in the perception of visual and audiovisual speech. We describe two experiments in which we disrupted perception of fine facial detail by decreasing spatial frequency (blurring) and disrupted perception of coarse configural information by facial inversion. For normal, unblurred talking faces, facial inversion had no influence on visual speech identification or on the effects of congruent or incongruent visual speech movements on perception of auditory speech. However, for blurred faces, facial inversion reduced identification of unimodal visual speech and effects of visual speech on perception of congruent and incongruent auditory speech. These effects were more pronounced for words whose appearance may be defined by fine featural detail. Implications for the nature of inversion effects in visual and audiovisual speech are discussed.
Attention Perception & Psychophysics | 2000
Timothy R. Jordan; Maxine V. Mccotter; Sharon M. Thomas
Research has shown that auditory speech recognition is influenced by the appearance of a talker’s face, but the actual nature of this visual information has yet to be established. Here, we report three experiments that investigated visual and audiovisual speech recognition using color, gray-scale, and point-light talking faces (which allowed comparison with the influence of isolated kinematic information). Auditory and visual forms of the syllables /ba/, /bi/, /ga/, /gi/, /va/, and /vi/ were used to produce auditory, visual, congruent, and incongruent audiovisual speech stimuli. Visual speech identification and visual influences on identifying the auditory components of congruent and incongruent audiovisual speech were identical for color and gray-scale faces and were much greater than for point-light faces. These results indicate that luminance, rather than color, underlies visual and audiovisual speech perception and that this information is more than the kinematic information provided by point-light faces. Implications for processing visual and audiovisual speech are discussed.
Journal of Experimental Psychology: Human Perception and Performance | 2001
Timothy R. Jordan; Sharon M. Thomas
The authors investigated the effects of changes in horizontal viewing angle on visual and audiovisual speech recognition in 4 experiments, using a talker's face viewed full face, in three-quarter view, and in profile. When only experimental items were shown (Experiments 1 and 2), identification of unimodal visual speech and visual speech influences on congruent and incongruent auditory speech were unaffected by viewing angle changes. However, when experimental items were intermingled with distractor items (Experiments 3 and 4), identification of unimodal visual speech decreased with profile views, whereas visual speech influences on congruent and incongruent auditory speech remained unaffected by viewing angle changes. These findings indicate that audiovisual speech recognition withstands substantial changes in horizontal viewing angle, but explicit identification of visual speech is less robust. Implications of this distinction for understanding the processes underlying visual and audiovisual speech recognition are discussed.
Cognitive Neuropsychology | 2003
Timothy R. Jordan; Geoffrey R. Patching; Sharon M. Thomas
The anatomical arrangement of the human visual system offers considerable scope for investigating functional asymmetries in hemispheric processing. In particular, because each hemisphere receives information initially from the contralateral visual hemifield, visual stimuli presented to the left of a central fixation point can be projected directly to the right hemisphere and visual stimuli presented to the right of a central fixation point can be projected directly to the left hemisphere. Numerous studies using displays of this type suggest that, for the vast majority of individuals, written words produce different patterns of performance when presented to different hemifields and these findings have inspired considerable debate about the processes available for word recognition in each hemisphere.
Visual Cognition | 2009
Elisa Back; Timothy R. Jordan; Sharon M. Thomas
The ability to recognize mental states from facial expressions is essential for effective social interaction. However, previous investigations of mental state recognition have used only static faces, so the benefit of dynamic information for recognizing mental states remains to be determined. Experiment 1 found that dynamic faces produced higher levels of recognition accuracy than static faces, suggesting that the additional information contained within dynamic faces can facilitate mental state recognition. Experiment 2 explored the facial regions that are important for providing dynamic information in mental state displays. This involved using a new technique to freeze motion in a particular facial region (eyes, nose, mouth) so that this region was static while the remainder of the face moved naturally. Findings showed that dynamic information in the eyes and the mouth was important, and the region of influence depended on the mental state. Processes involved in mental state recognition are discussed.
Journal of Experimental Psychology: Learning, Memory and Cognition | 2003
Timothy R. Jordan; Sharon M. Thomas; Geoffrey R. Patching
D. Briihl and A. W. Inhoff (1995; see record 1995-20036-001) found that exterior letter pairs showed no privileged status in reading when letter pairs were presented as parafoveal primes. However, T. R. Jordan, S. M. Thomas, G. R. Patching, and K. C. Scott-Brown (2003; see record 2003-07955-013) used a paradigm that (a) allowed letter pairs to exert influence at any point in the reading process, (b) overcame problems with the stimulus manipulations used by Briihl and Inhoff (1995), and (c) revealed a privileged status for exterior letter pairs in reading. A. W. Inhoff, R. Radach, B. M. Eiter, and M. Skelly (2003; see record 2003-07955-014) made a number of claims about the Jordan, Thomas, et al. study, most of which focus on parafoveal processing. This article addresses these claims and points out that although studies that use parafoveal previews provide an important contribution, other techniques and paradigms are required to reveal the full role of letter pairs in reading.