Paula E. Tucker
University of Washington
Publications
Featured research published by Paula E. Tucker.
Attention, Perception, & Psychophysics | 2000
Lynne E. Bernstein; Paula E. Tucker; Marilyn E. Demorest
In this study of visual phonetic speech perception without accompanying auditory speech stimuli, adults with normal hearing (NH; n = 96) and with severely to profoundly impaired hearing (IH; n = 72) identified consonant-vowel (CV) nonsense syllables and words in isolation and in sentences. The measures of phonetic perception were the proportion of phonemes correct and the proportion of transmitted feature information for CVs, the proportion of phonemes correct for words, and the proportion of phonemes correct and the amount of phoneme substitution entropy for sentences. The results demonstrated greater sensitivity to phonetic information in the IH group. Transmitted feature information was related to isolated word scores for the IH group, but not for the NH group. Phoneme errors in sentences were more systematic in the IH than in the NH group. Individual differences in phonetic perception for CVs were more highly associated with word and sentence performance for the IH than for the NH group. The results suggest that the necessity to perceive speech without hearing can be associated with enhanced visual phonetic perception in some individuals.
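The measures named in this abstract (transmitted feature information and phoneme substitution entropy) are derived from stimulus-by-response confusion matrices. The following is a minimal illustrative sketch of how such measures are typically computed, assuming a Miller–Nicely-style information transfer analysis; the function names and toy counts are invented for illustration and are not the paper's own analysis code.

```python
# Illustrative sketch (not from the paper): two measures the abstract names,
# computed from a stimulus-by-response confusion matrix of counts.
# Rows are presented phonemes/features, columns are responses.
import numpy as np

def transmitted_information(confusions: np.ndarray) -> float:
    """Mutual information T(stimulus; response) in bits, Miller-Nicely style."""
    p = confusions / confusions.sum()          # joint probabilities
    px = p.sum(axis=1, keepdims=True)          # stimulus marginals
    py = p.sum(axis=0, keepdims=True)          # response marginals
    with np.errstate(divide="ignore", invalid="ignore"):
        terms = np.where(p > 0, p * np.log2(p / (px * py)), 0.0)
    return float(terms.sum())

def relative_transmitted_information(confusions: np.ndarray) -> float:
    """Transmitted information as a proportion of the stimulus entropy."""
    px = confusions.sum(axis=1) / confusions.sum()
    h_stimulus = -np.sum(px[px > 0] * np.log2(px[px > 0]))
    return transmitted_information(confusions) / h_stimulus

def substitution_entropy(confusions: np.ndarray) -> float:
    """Mean entropy (bits) of the error responses for each stimulus:
    lower values indicate more systematic substitution patterns."""
    errors = confusions.astype(float).copy()
    np.fill_diagonal(errors, 0.0)              # keep only substitutions
    entropies = []
    for row in errors:
        total = row.sum()
        if total == 0:
            continue                            # no errors for this stimulus
        q = row[row > 0] / total
        entropies.append(-np.sum(q * np.log2(q)))
    return float(np.mean(entropies)) if entropies else 0.0

# Toy example: 3 stimuli x 3 responses (counts).
toy = np.array([[18, 2, 0],
                [3, 15, 2],
                [1, 4, 15]])
print(relative_transmitted_information(toy), substitution_entropy(toy))
```

Under this reading, lower substitution entropy corresponds to the more systematic error patterns that the abstract reports for the IH group.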
Memory & Cognition | 2000
Lynne E. Bernstein; Paula E. Tucker
The present study examined the sensitivity of a subjective familiarity measure to differences in word exposure within and between populations that differ dramatically in their perceptual experience. Descriptive measures of language ability and subjective familiarity ratings for 450 words were collected from a group of college-educated adults with normal hearing and a group of college-educated deaf adults. The results demonstrate the sensitivity of subjective familiarity ratings to both between- and within-group differences in word experience. Specifically, the deaf participants consistently rated words as less familiar than did hearing participants. Furthermore, item-level correlations within a participant group were higher than those between groups. Within groups, mean familiarity ratings were correlated with descriptive measures of language ability. The results are discussed in relation to a simple sampling model of word experience and the language experience of the participant groups.
Conference on Computers and Accessibility | 2013
Christian Vogler; Paula E. Tucker; Norman Williams
In this experience report we describe the accessibility challenges that deaf and hard of hearing committee members faced while collaborating with a larger group of hearing committee members over a period of 2½ years. We explain some recurring problems, how audio-only conferences fall short even when relay services and interpreters are available, and how we devised a videoconferencing setup using FuzeMeeting to minimize the accessibility barriers. We also describe best practices, lessons learned, and pitfalls to avoid in deploying this type of setup.
Journal of the Acoustical Society of America | 1996
Robin S. Waldstein; Edward T. Auer; Paula E. Tucker; Lynne E. Bernstein
Word familiarity is an important factor in word recognition and lexical access for hearing individuals. Subjective word familiarity ratings are hypothesized to reflect experience with words irrespective of the modality (i.e., spoken or written) through which exposure has taken place, and to provide an estimate of the size of the mental lexicon. To investigate how word familiarity is related to lipreading proficiency, 450 printed words were presented for rating on a seven‐point scale to 50 deaf and 50 hearing participants. Preliminary results revealed that the deaf participants produced lower mean familiarity ratings than did the hearing participants, for high‐, medium‐, and low‐familiarity words (hearing means = 6.7, 4.8, 3.0; deaf means = 6.0, 3.8, 2.6). Among the deaf participants, correlations between established familiarity ratings and individuals’ ratings were reliably higher for excellent than for good lipreaders, a possible indication that perceptual experience influences the structure of the lexicon. At the same time, the performance of the excellent lipreaders provides support for the hypothesis that lexical organization does not depend on the perceptual input modality (i.e., vision versus hearing). [Work supported by NIH Grant No. DC00695.]
Journal of the Acoustical Society of America | 1996
Edward T. Auer; Robin S. Waldstein; Paula E. Tucker; Lynne E. Bernstein
In individuals with normal hearing, words estimated to be learned earlier are recognized more rapidly than words estimated to be learned later. To investigate how word knowledge is related to lipreading proficiency, word age‐of‐acquisition (AOA) estimates were obtained from 50 hearing (H) and 50 deaf (D) (80‐dB HL pure‐tone average or greater hearing losses acquired before the age of 48 months) adults. Participants judged AOA for the 175 words in Form M of the Peabody Picture Vocabulary Test‐Revised using an 11‐point scale, and responded whether the words were acquired through speech, sign language, or orthography. The two groups differed in when (mean AOA: H = 8.9 years, D = 10.6 years) and how (H = 69% speech and 31% orthography; D = 38% speech, 45% orthography, and 17% sign language) words were judged to be acquired. However, item analyses revealed that the relative acquisition order was essentially identical across groups (r = 0.965). Interestingly, within the deaf group, better lipreaders estimated that mo...
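The item analyses mentioned here compare per-word mean AOA estimates across the two groups. A hedged sketch of that kind of computation, with invented ratings rather than the study's data, shows how group means can differ while the item-level ordering stays nearly identical (as in the reported r = 0.965).

```python
# Illustrative sketch (invented data, not the study's): an item-level analysis
# correlates each word's mean age-of-acquisition (AOA) estimate in one group
# with its mean estimate in the other group.
import numpy as np

rng = np.random.default_rng(0)
n_words, n_hearing, n_deaf = 175, 50, 50

# Hypothetical per-participant AOA ratings on an 11-point scale (rows = words).
shared_item_level = rng.uniform(1, 11, n_words)           # shared item ordering
hearing = shared_item_level[:, None] + rng.normal(0, 1.5, (n_words, n_hearing))
deaf = shared_item_level[:, None] + 1.7 + rng.normal(0, 1.5, (n_words, n_deaf))

# Group means differ (deaf later overall), but the item-level ordering can
# still be nearly identical, which is what a high item-level r expresses.
hearing_item_means = hearing.mean(axis=1)
deaf_item_means = deaf.mean(axis=1)
r = np.corrcoef(hearing_item_means, deaf_item_means)[0, 1]
print(f"mean AOA: H = {hearing_item_means.mean():.1f}, "
      f"D = {deaf_item_means.mean():.1f}, item-level r = {r:.3f}")
```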
Journal of the Acoustical Society of America | 1993
Philip F. Seitz; Brad Rakerd; Paula E. Tucker
Reaction times to spoken digits presented in a speeded memory scanning procedure were measured for groups of listeners with normal hearing (N = 24) and with congenital or early‐onset sensorineural hearing losses (N = 12). Separate groups of 12 normal‐hearing listeners were tested under conditions of good stimulus quality (no distortion, high SNR) versus poor stimulus quality (low‐pass filtered, low SNR). The speeded memory scanning procedure allows total reaction time to be decomposed into "encoding" and "comparison" components which correspond to separate stages in a model of human information processing. At issue here is whether long‐term effects of hearing loss, such as possible deficits in phonological and/or lexical representation of spoken language, lead to unusual processing costs at either the encoding or comparison stage. Experimental results suggest that impaired listeners incur the same encoding costs as normal‐hearing listeners presented with poor‐quality stimuli. However, comparison costs for...
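The decomposition alluded to follows the usual logic of speeded memory scanning: regressing reaction time on memory-set size yields an intercept read as an encoding (plus response) cost and a slope read as a per-item comparison cost. The sketch below illustrates that logic with made-up data; it is an assumption about the procedure, not the authors' analysis.

```python
# Illustrative sketch (made-up data): in a speeded memory scanning task,
# regressing reaction time (RT) on memory-set size separates a per-item
# "comparison" cost (slope) from an "encoding"/response cost (intercept).
import numpy as np

rng = np.random.default_rng(1)
set_sizes = np.repeat([1, 2, 4, 6], 40)                      # trials per set size
true_encoding_ms, true_comparison_ms = 420.0, 45.0           # hypothetical costs
rt = (true_encoding_ms + true_comparison_ms * set_sizes
      + rng.normal(0, 60, set_sizes.size))                   # simulated RTs

# Ordinary least squares fit: RT = intercept + slope * set_size.
slope, intercept = np.polyfit(set_sizes, rt, deg=1)
print(f"estimated encoding cost ~ {intercept:.0f} ms, "
      f"comparison cost ~ {slope:.0f} ms/item")
```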
Conference on Computers and Accessibility | 2013
Linda Kozma-Spytek; Paula E. Tucker; Christian Vogler
We present a study into the effects of the addition of a video channel, video frame rate, and audio-video synchrony on the ability of people with hearing loss to understand spoken language during video telephone conversations. Analysis indicates that higher frame rates result in a significant improvement in speech understanding, even when audio and video are not perfectly synchronized. At lower frame rates, audio-video synchrony is critical: if the audio is perceived 100 ms ahead of video, understanding drops significantly; if, on the other hand, the audio is perceived 100 ms behind video, understanding does not degrade versus perfect audio-video synchrony. These findings are validated in extensive statistical analysis over two within-subjects experiments with 24 and 22 participants, respectively.
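The reported effects come from within-subjects designs. As a hedged illustration only (the study's actual statistical models are not reproduced here), a paired comparison of per-participant intelligibility scores across two frame-rate conditions might look like the sketch below; the score values and the scipy-based test are assumptions.

```python
# Illustrative sketch (invented scores): a paired, within-subjects comparison of
# speech-understanding scores at a low vs. a high video frame rate. This is a
# generic analysis pattern, not the study's actual model.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n_participants = 24
low_fps = rng.normal(0.62, 0.08, n_participants)             # proportion correct
high_fps = low_fps + rng.normal(0.07, 0.04, n_participants)  # higher frame rate helps

t, p = stats.ttest_rel(high_fps, low_fps)                    # paired t-test
print(f"mean gain = {np.mean(high_fps - low_fps):.3f}, "
      f"t({n_participants - 1}) = {t:.2f}, p = {p:.4f}")
```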
Journal of the Acoustical Society of America | 1993
Lynne E. Bernstein; David C. Coulter; Paula E. Tucker; Marilyn E. Demorest
The possible benefit of a wearable, single‐channel versus eight‐channel tactile aid for conveying voice fundamental frequency (F0) was estimated in three experiments. Severely or profoundly hearing‐impaired (HI) and normal‐hearing (NH) adults identified the position of stressed words and rising versus falling intonation in sentences previously recorded for this purpose by Bernstein et al. [J. Acoust. Soc. Am. 85, 397–405 (1989)]. In experiment 1, NH subjects performed the identification task in counter‐balanced visual‐alone (VA) and visual‐tactile (VT) conditions. Both tactile configurations conveyed intonation but neither conveyed stress. In experiment 2, NH subjects performed the task tactile‐alone. Both stress and intonation were conveyed. In experiment 3, pre‐ and post‐lingually HI subjects demonstrated effects of the aid for identification of intonation but not of stress. As in the previous study (Bernstein et al., 1989), visual stress was highly accurate in all VA conditions and tactile information show...
Journal of Speech Language and Hearing Research | 2001
Lynne E. Bernstein; Paula E. Tucker
Archive | 1998
Lynne E. Bernstein; Marilyn E. Demorest; Paula E. Tucker