Arthur Boothroyd
City University of New York
Publications
Featured research published by Arthur Boothroyd.
Journal of the Acoustical Society of America | 1988
Arthur Boothroyd; Susan Nittrouer
Percent recognition of phonemes and whole syllables, measured in both consonant-vowel-consonant (CVC) words and CVC nonsense syllables, is reported for normal young adults listening at four signal-to-noise (S/N) ratios. Similar data are reported for the recognition of words and whole sentences in three types of sentence: high predictability (HP) sentences, with both semantic and syntactic constraints; low predictability (LP) sentences, with primarily syntactic constraints; and zero predictability (ZP) sentences, with neither semantic nor syntactic constraints. The probability of recognition of speech units in context (pc) is shown to be related to the probability of recognition without context (pi) by the equation pc = 1 - (1 - pi)^k, where k is a constant. The factor k is interpreted as the amount by which the channels of statistically independent information are effectively multiplied when contextual constraints are added. Empirical values of k are approximately 1.3 and 2.7 for word and sentence context, respectively. In a second analysis, the probability of recognition of wholes (pw) is shown to be related to the probability of recognition of the constituent parts (pp) by the equation pw = pp^j, where j represents the effective number of statistically independent parts within a whole. The empirically determined mean values of j for nonsense materials are not significantly different from the number of parts in a whole, as predicted by the underlying theory. In CVC words, the value of j is constant at approximately 2.5. In the four-word HP sentences, it falls from approximately 2.5 to approximately 1.6 as the inherent recognition probability for words falls from 100% to 0%, demonstrating an increasing tendency to perceive HP sentences either as wholes, or not at all, as S/N ratio deteriorates.
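The two context equations can be applied directly. A minimal sketch of both relations, pc = 1 - (1 - pi)^k and pw = pp^j (variable names follow the abstract; the example values are illustrative, not data from the study):

```python
import math

def context_score(pi, k):
    """Recognition probability in context: pc = 1 - (1 - pi)**k,
    where k is the factor by which contextual constraints multiply
    the effective channels of independent information."""
    return 1.0 - (1.0 - pi) ** k

def j_factor(pw, pp):
    """Effective number of statistically independent parts j,
    solved from pw = pp**j, i.e. j = log(pw) / log(pp)."""
    return math.log(pw) / math.log(pp)

# With 50% recognition out of context, word context (k ~ 1.3) and
# sentence context (k ~ 2.7) raise the in-context score:
word = context_score(0.5, 1.3)      # ~0.59
sentence = context_score(0.5, 2.7)  # ~0.85
```

Note that with k = 1 (no contextual benefit) the equation reduces to pc = pi, and with pp^j, a whole made of j truly independent parts is recognized only when every part is.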
Journal of the Acoustical Society of America | 1986
Arthur Boothroyd
A wearable tactile sensory aid which presents a vibratory signal representative of voice pitch and intonation patterns to the skin. The vibratory signal consists of a constant amplitude square wave having a frequency equal to, or some fraction of, the fundamental frequency of the speech input. The vibratory signal is applied as a tactile stimulus, and is displaced along a linear array of transducers in contact with the skin in proportion to the logarithm of the fundamental frequency. Accordingly, the wearable tactile display encodes the fundamental frequency to provide both frequency of actuation and spatial indications thereof.
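The spatial encoding described above (displacement along the transducer array proportional to the logarithm of the fundamental frequency) can be sketched as follows; the frequency range (80-400 Hz) and array size (8 transducers) are illustrative assumptions, not values from the patent:

```python
import math

def transducer_index(f0, f_min=80.0, f_max=400.0, n=8):
    """Map fundamental frequency f0 (Hz) to a position along a linear
    array of n vibrotactile transducers, spaced so that displacement
    is proportional to log(f0). f_min, f_max, and n are illustrative."""
    x = (math.log(f0) - math.log(f_min)) / (math.log(f_max) - math.log(f_min))
    return max(0, min(n - 1, round(x * (n - 1))))
```

With this mapping, an octave step in F0 moves the stimulus a fixed distance along the array, which is the point of the logarithmic encoding: equal pitch intervals become equal tactile displacements.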
Journal of the Acoustical Society of America | 2000
Laurie S. Eisenberg; Robert V. Shannon; Amy S. Martinez; John Wygonski; Arthur Boothroyd
Adult listeners are able to recognize speech even under conditions of severe spectral degradation. To assess the developmental time course of this robust pattern recognition, speech recognition was measured in two groups of children (5-7 and 10-12 years of age) as a function of the degree of spectral resolution. Results were compared to recognition performance of adults listening to the same materials and conditions. The spectral detail was systematically manipulated using a noise-band vocoder in which filtered noise bands were modulated by the amplitude envelope from the same spectral bands in speech. Performance scores between adults and older children did not differ statistically, whereas scores by younger children were significantly lower; they required more spectral resolution to perform at the same level as adults and older children. Part of the deficit in younger children was due to their inability to utilize fully the sensory information, and part was due to their incomplete linguistic/cognitive development. The fact that young children cannot recognize spectrally degraded speech as well as adults suggests that a long learning period is required for robust acoustic pattern recognition. These findings have implications for the application of auditory sensory devices for young children with early-onset hearing loss.
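The noise-band vocoder manipulation can be sketched in miniature. The single-channel, pure-Python version below extracts an amplitude envelope by rectification and smoothing and uses it to modulate a noise carrier; the per-band filter bank of a real vocoder, and all parameter values, are simplifying assumptions:

```python
import math
import random

def envelope(samples, window=64):
    """Amplitude envelope via full-wave rectification and a moving average."""
    env, buf, acc = [], [], 0.0
    for s in samples:
        buf.append(abs(s))
        acc += abs(s)
        if len(buf) > window:
            acc -= buf.pop(0)
        env.append(acc / len(buf))
    return env

def noise_vocode(samples, window=64, seed=0):
    """Replace fine structure with noise: modulate a random carrier by the
    signal's amplitude envelope. A real noise-band vocoder does this
    separately in each band-pass filtered channel; one channel shown here."""
    rng = random.Random(seed)
    return [e * rng.uniform(-1.0, 1.0) for e in envelope(samples, window)]

# Example: vocode 100 ms of a 200 Hz tone sampled at 8 kHz
fs = 8000
tone = [math.sin(2 * math.pi * 200 * n / fs) for n in range(fs // 10)]
vocoded = noise_vocode(tone)
```

Varying the number of channels in such a processor is how the spectral resolution available to the listeners in the study was controlled.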
Journal of the Acoustical Society of America | 1990
Susan Nittrouer; Arthur Boothroyd
Perception is influenced both by characteristics of the stimulus, and by the context in which it is presented. The relative contributions of each of these factors depend, to some extent, on perceiver characteristics. The contributions of word and sentence context to the perception of phonemes within words and words within sentences, respectively, have been well studied for normal, young adults. However, far less is known about these context effects for much younger and older listeners. In the present study, measures of these context effects were obtained from young children (ages 4 years 6 months to 6 years 6 months) and from older adults (over 62 years), and compared with those of the young adults in an earlier study [A. Boothroyd and S. Nittrouer, J. Acoust. Soc. Am. 84, 101-114 (1988)]. Both children and older adults demonstrated poorer overall recognition scores than did young adults. However, responses of children and older adults demonstrated similar context effects, with two exceptions: Children used the semantic constraints of sentences to a lesser extent than did young or older adults, and older adults used lexical constraints to a greater extent than either of the other two groups.
Journal of the Acoustical Society of America | 2000
Brett A. Martin; Arthur Boothroyd
The acoustic change complex (ACC) is a scalp-recorded negative-positive voltage swing elicited by a change during an otherwise steady-state sound. The ACC was obtained from eight adults in response to changes of amplitude and/or spectral envelope at the temporal center of a three-formant synthetic vowel lasting 800 ms. In the absence of spectral change, the group mean waveforms showed a clear ACC to amplitude increments of 2 dB or more and decrements of 3 dB or more. In the presence of a change of second formant frequency (from perceived /u/ to perceived /i/), amplitude increments increased the magnitude of the ACC but amplitude decrements had little or no effect. The fact that the just detectable amplitude change is close to the psychoacoustic limits of the auditory system augurs well for the clinical application of the ACC. The failure to find a condition under which the spectrally elicited ACC is diminished by a small change of amplitude supports the conclusion that the observed ACC to a change of spectral envelope reflects some aspect of cortical frequency coding. Taken together, these findings support the potential value of the ACC as an objective index of auditory discrimination capacity.
Ear and Hearing | 1999
Brett A. Martin; Arthur Boothroyd
OBJECTIVE 1) To determine whether the N1-P2 acoustic change complex is elicited by a change of periodicity in the middle of an ongoing stimulus, in the absence of changes of spectral envelope or rms intensity. 2) To compare the N1-P2 acoustic change complex with the mismatch negativity elicited by the same stimuli in terms of amplitude and signal-to-noise ratio. DESIGN The signals used in this study were a tonal complex and a band of noise having the same spectral envelope and rms intensity. For elicitation of the acoustic change complex, the signals were concatenated to produce two stimuli that changed in the middle (noise-tone, tone-noise). Two control stimuli were created by concatenating two copies of the noise and two copies of the tone (noise-only, tone-only). The stimuli were presented using an onset-to-onset interstimulus interval of 3 sec. For elicitation of the mismatch negativity, the tonal complex and noise band stimuli were presented using an oddball paradigm (deviant probability = 0.14) with an onset-to-onset interstimulus interval of 600 msec. The stimuli were presented via headphones at 80 dB SPL to 10 adults with normal hearing. Subjects watched a silent video during testing. RESULTS The responses to the noise-only and tone-only stimuli showed a clear N1-P2 complex to the onset of stimulation followed by a sustained potential that continued until the offset of stimulation. The noise-tone and tone-noise stimuli elicited an additional N1-P2 acoustic change complex in response to the change in periodicity occurring in the middle. The acoustic change complex was larger for the tone-noise stimulus than for the noise-tone stimulus. A clear mismatch negativity was elicited by both the noise band and tonal complex stimuli. In contrast to the acoustic change complex, there was no significant difference in amplitude across the two stimuli.
The acoustic change complex was a more sensitive index of peripheral discrimination capacity than the mismatch negativity, primarily because its average amplitude was 2.5 times as large. CONCLUSIONS These findings indicate that both the acoustic change complex and the mismatch negativity are sensitive indexes of the neural processing of changes in periodicity, though the acoustic change complex has an advantage in terms of amplitude. The results support the possible utility of the acoustic change complex as a clinical tool in the assessment of peripheral speech perception capacity.
Ear and Hearing | 1998
Jodi Ostroff; Brett A. Martin; Arthur Boothroyd
Objective: To investigate whether the evoked potential to a complex naturally produced speech syllable could be decomposed to reflect the contributions of the acoustic events contained in the constituent phonemes. Design: Auditory cortical evoked potentials N1 and P2 were obtained in eight adults with normal hearing. Three naturally produced speech stimuli were used: 1) the syllable [sei]; 2) the sibilant [s], extracted from the syllable; 3) the vowel [ei] extracted from the syllable. The isolated sibilant and vowel preserved the same time relationships to the sampling window as they did in the complete syllable. Evoked potentials were collected at Fz, Cz, Pz, A1, and A2, referenced to the nose. Results: In the group mean waveforms, clear responses were observed to both the sibilant and the isolated vowel. Although the response to the [s] was weaker than that to [ei], both had N1 and P2 components with latencies, in relation to sound onset, appropriate to cortical onset potentials. The vowel onset response was preserved in the response to the complete syllable though with reduced amplitude. This pattern was observable in six of the eight waveforms from individual subjects. Conclusions: It seems likely that the response to [ei] within the complete syllable reflects changes of cortical activation caused by amplitude or spectral change at the transition from consonant to vowel. The change from aperiodic to periodic stimulation may also produce changes in cortical activation that contribute to the observed response. Whatever the mechanism, the important conclusion is that the auditory cortical evoked potential to complex, time‐varying speech waveforms can reflect features of the underlying acoustic patterns. Such potentials may have value in the evaluation of speech perception capacity in young hearing‐impaired children.
Ear and Hearing | 1991
Arthur Boothroyd; Ann E. Geers; Jean S. Moog
Childhood deafness can have serious and far-reaching effects on development but, with appropriate intervention, these effects can be reduced (see, for example, Boothroyd, 1988; Davis & Silverman, 1978; Ling & Ling, 1978). One component of modern intervention is effective use of the auditory capacity possessed by most deaf children. With modern hearing aids and suitable training, this “residual hearing” can play a significant role, and often the primary role, in the development of receptive and expressive spoken language skills and language competence (see, for example, Bess, Freeman, & Sinclair, 1981; Ling & Milne, 1981; Ross & Giolas, 1978). There have always been deaf children, however, who have little or no residual hearing. Even with the most powerful hearing aids they perceive only gross variations of speech amplitude over time, and for many, this perception is mediated not by the sense of hearing, but by the sense of touch (Boothroyd & Cawkwell, 1970; Erber, 1978, 1979; Nober, 1967). These children account for, perhaps, 5 to 10% of the population with hearing losses of sufficient severity to prevent the spontaneous acquisition of spoken language skills. They typically have better-ear, three-frequency, average thresholds in excess of 110 dB. With the advent of cochlear implants it has become possible to provide many such children with useful auditory capacity. The purpose of the present paper is to discuss the practical implications of this development. Five questions will be addressed: 1. What exactly are the auditory capacities of implanted children? 2. How much greater are these capacities than could reasonably have been expected from hearing aids? 3. Based on our experience with less severely deaf children, how much difference should the improvement of auditory capacity, after implantation, …
Ear and Hearing | 1992
Irving Hochberg; Arthur Boothroyd; Mark Weiss; Sharon Hellman
The recognition of phonemes in consonant-vowel-consonant words, presented in speech-shaped random noise, was measured as a function of signal-to-noise ratio (S/N) in 10 normally hearing adults and 10 successful adult users of the Nucleus cochlear implant. Optimal scores (measured at an S/N of +25 dB) were 98% for the average normal subject and 42% for the average implantee. Phoneme recognition threshold was defined as the S/N at which the phoneme recognition score fell to 50% of its optimal value. This threshold was -2 dB for the average normal subject and +9 dB for the average implantee. Application of a digital noise suppression algorithm (INTEL) to the mixed speech plus noise signal had no effect on the optimal phoneme recognition score of either group or on the phoneme recognition threshold of the normal group. It did, however, improve the phoneme recognition threshold of the implant group by an average of 4 to 5 dB. These findings illustrate the noise susceptibility of Nucleus cochlear implant users and suggest that single-channel digital noise reduction techniques may offer some relief from this problem.
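The threshold defined above (the S/N at which the phoneme score falls to 50% of its optimal value) amounts to interpolating along a measured performance-intensity function. A minimal sketch; the example scores are illustrative, not the study's data:

```python
def recognition_threshold(snr_scores, optimal):
    """S/N (dB) at which the score falls to 50% of the optimal score,
    by linear interpolation between adjacent measured points.
    snr_scores: list of (snr_db, percent_correct) pairs."""
    target = 0.5 * optimal
    pts = sorted(snr_scores)
    for (s0, p0), (s1, p1) in zip(pts, pts[1:]):
        if p0 <= target <= p1:
            if p1 == p0:
                return s0
            return s0 + (target - p0) * (s1 - s0) / (p1 - p0)
    return None  # target never crossed within the measured range

# Illustrative performance-intensity function (optimal score 98%)
scores = [(-10, 10), (-5, 30), (0, 60), (5, 85), (10, 95), (25, 98)]
threshold = recognition_threshold(scores, optimal=98)  # ~ -1.8 dB
```

Because the two groups' optimal scores differ (98% vs. 42%), defining the threshold relative to each group's own optimum lets the -2 dB and +9 dB figures be compared on a common footing.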
Ear and Hearing | 1988
Arthur Boothroyd; Theresa Hnath-Chisolm; Laurie Hanin; Liat Kishon-Rabin
Recognition of words in conversational sentences of known topic was measured in nine normally hearing subjects by speechreading alone and by speechreading supplemented with auditory presentation of the output of an electroglottograph. Mean word recognition probability rose from 30% to 77% with the addition of the acoustic signal. When this signal was filtered to remove possible high-frequency spectral cues, the supplemented score fell, but only by a marginally significant 7 percentage points, supporting the conclusion that voice fundamental frequency was the principal source of enhancement. Enhancement occurred for all subjects, regardless of speechreading competence.