Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Brian E. Walden is active.

Publication


Featured research published by Brian E. Walden.


Journal of the Acoustical Society of America | 1998

Auditory-visual speech recognition by hearing-impaired subjects: Consonant recognition, sentence recognition, and auditory-visual integration

Ken W. Grant; Brian E. Walden; Philip F. Seitz

Factors leading to variability in auditory-visual (AV) speech recognition include the subjects' ability to extract auditory (A) and visual (V) signal-related cues, the integration of A and V cues, and the use of phonological, syntactic, and semantic context. In this study, measures of A, V, and AV recognition of medial consonants in isolated nonsense syllables and of words in sentences were obtained in a group of 29 hearing-impaired subjects. The test materials were presented in a background of speech-shaped noise at 0-dB signal-to-noise ratio. Most subjects achieved substantial AV benefit for both sets of materials relative to A-alone recognition performance. However, there was considerable variability in AV speech recognition both in terms of the overall recognition score achieved and in the amount of audiovisual gain. To account for this variability, consonant confusions were analyzed in terms of phonetic features to determine the degree of redundancy between A and V sources of information. In addition, a measure of integration ability was derived for each subject using recently developed models of AV integration. The results indicated that (1) AV feature reception was determined primarily by visual place cues and auditory voicing + manner cues, (2) the ability to integrate A and V consonant cues varied significantly across subjects, with better integrators achieving more AV benefit, and (3) significant intra-modality correlations were found between consonant measures and sentence measures, with AV consonant scores accounting for approximately 54% of the variability observed for AV sentence recognition. Integration modeling results suggested that speechreading and AV integration training could be useful for some individuals, potentially providing as much as 26% improvement in AV consonant recognition.
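
The feature analysis described above (consonant confusions collapsed onto phonetic features such as voicing, manner, and place) can be illustrated with a short sketch. It computes the relative information transmitted for a single binary feature from a stimulus-by-response confusion matrix, in the style of Miller and Nicely; the 4-consonant matrix and the feature grouping below are invented for illustration and are not data from the study.

```python
import numpy as np

def relative_info_transmitted(confusions, feature_of):
    """Relative information transmitted for a phonetic feature, given a
    stimulus x response confusion matrix and a map from consonant index
    to feature value (Miller & Nicely style analysis)."""
    n = confusions.sum()
    feats = sorted(set(feature_of))
    k = len(feats)
    # Collapse the consonant confusion matrix onto feature categories.
    collapsed = np.zeros((k, k))
    for i, fi in enumerate(feature_of):
        for j, fj in enumerate(feature_of):
            collapsed[feats.index(fi), feats.index(fj)] += confusions[i, j]
    p = collapsed / n
    px = p.sum(axis=1)            # stimulus (feature) probabilities
    py = p.sum(axis=0)            # response (feature) probabilities
    # Mutual information between stimulus and response features, in bits.
    mi = sum(p[i, j] * np.log2(p[i, j] / (px[i] * py[j]))
             for i in range(k) for j in range(k) if p[i, j] > 0)
    hx = -sum(px[i] * np.log2(px[i]) for i in range(k) if px[i] > 0)
    return mi / hx                # proportion of feature information received

# Hypothetical 4-consonant confusion matrix (/p t b d/), rows = stimuli.
conf = np.array([[20, 5, 3, 2],
                 [6, 19, 2, 3],
                 [2, 3, 21, 4],
                 [1, 2, 5, 22]])
voicing = ['unvoiced', 'unvoiced', 'voiced', 'voiced']
print(f"voicing information transmitted: {relative_info_transmitted(conf, voicing):.2f}")
```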


Ear and Hearing | 1987

Description and validation of an LDL procedure designed to select SSPL90.

David B. Hawkins; Brian E. Walden; Allen A. Montgomery; Robert A. Prosek

A new procedure is described for measuring loudness discomfort levels (LDLs) for the purpose of selecting SSPL90 characteristics of hearing aids. The person is seated in a sound field wearing a high-output hearing aid (with a known amount of 2 cm3 coupler gain) connected to a personal earmold. The loudness of frequency-specific signals is rated using a series of loudness category descriptors. The LDL is defined in terms of the SPL developed in a 2 cm3 coupler, thus making selection of SSPL90 from hearing aid specification sheets practical. Experiments on LDL stability over time and validation of the SSPL90 selection are reported.
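
The arithmetic behind referencing the LDL to coupler SPL can be sketched as follows. This is a simplified illustration that assumes the coupler SPL at the point of discomfort is estimated by adding the aid's known 2 cm3 coupler gain at each frequency to the sound-field level at which discomfort is reported; the gain and level values are hypothetical, and any additional corrections used in the published procedure are omitted.

```python
# Minimal sketch of referencing a sound-field LDL to 2 cm3 coupler SPL.
# Assumption (for illustration only): the coupler SPL at discomfort is
# approximated by the sound-field signal level at the LDL rating plus the
# aid's known 2 cm3 coupler gain at that frequency.

coupler_gain_db = {500: 45.0, 1000: 50.0, 2000: 52.0, 4000: 48.0}         # hypothetical aid gains
ldl_sound_field_db_spl = {500: 78.0, 1000: 74.0, 2000: 70.0, 4000: 68.0}  # hypothetical LDL levels

for freq_hz, field_level in ldl_sound_field_db_spl.items():
    coupler_ldl = field_level + coupler_gain_db[freq_hz]
    print(f"{freq_hz} Hz: LDL is roughly {coupler_ldl:.0f} dB SPL in a 2 cm3 coupler; "
          f"choose an SSPL90 at or below this value")
```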


Journal of the Acoustical Society of America | 1993

Evaluating the articulation index for auditory-visual consonant recognition

Ken W. Grant; Brian E. Walden

Adequacy of the ANSI standard for calculating the articulation index (AI) [ANSI S3.5‐1969 (R1986)] was evaluated by measuring auditory (A), visual (V), and auditory–visual (AV) consonant recognition under a variety of bandpass‐filtered speech conditions. Contrary to ANSI predictions, filter conditions having the same auditory AI did not necessarily result in the same auditory–visual AI. Low‐frequency bands of speech tended to provide more benefit to AV consonant recognition than high‐frequency bands. Analyses of the auditory error patterns produced by the different filter conditions showed a strong negative correlation between the degree of A and V redundancy and the amount of benefit obtained when A and V cues were combined. These data indicate that the ANSI auditory–visual AI procedure is inadequate for predicting AV consonant recognition performance under conditions of severe spectral shaping.
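
The articulation index that the study evaluates is, at its core, a band-importance-weighted sum of audibility. The sketch below illustrates that basic computation; the band-importance weights, speech peak levels, and thresholds are hypothetical placeholders rather than the values tabulated in ANSI S3.5-1969.

```python
# Minimal sketch of an articulation-index style computation:
# AI = sum over frequency bands of (band importance) x (band audibility),
# where audibility is the proportion of a 30-dB speech dynamic range
# above threshold in that band, clipped to [0, 1].
# The numbers below are hypothetical placeholders, not ANSI S3.5-1969 values.

def band_audibility(speech_peak_db, threshold_db, dynamic_range_db=30.0):
    aud = (speech_peak_db - threshold_db) / dynamic_range_db
    return min(max(aud, 0.0), 1.0)

bands = [
    # (importance weight, speech peak dB SPL, listener threshold dB SPL)
    (0.20, 60.0, 45.0),
    (0.30, 58.0, 40.0),
    (0.30, 55.0, 55.0),
    (0.20, 50.0, 70.0),
]

ai = sum(w * band_audibility(peak, thr) for w, peak, thr in bands)
print(f"articulation index ~ {ai:.2f}")   # 0 (inaudible) to 1 (fully audible)
```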


Ear and Hearing | 2003

Identifying dead regions in the cochlea: psychophysical tuning curves and tone detection in threshold-equalizing noise.

Van Summers; Michelle R. Molis; Hannes Müsch; Brian E. Walden; Rauna K. Surr; Mary T. Cord

Objective: Recent studies indicate that high-frequency amplification may provide little benefit for listeners with moderate-to-severe high-frequency hearing loss, and may even reduce speech recognition. Moore and colleagues have proposed a direct link between this lack of benefit and the presence of regions of nonfunctioning inner hair cells (dead regions) in the basal cochlea and have suggested that psychophysical tuning curves (PTCs) and tone detection thresholds in threshold-equalizing noise (TEN) are psychoacoustic measures that allow detection of dead regions (Moore, Huss, Vickers, Glasberg, & Alcántara, 2000; Vickers, Moore, & Baer, 2001). The experiments reported here examine the consistency of TEN and PTC tasks in identifying dead regions in listeners with high-frequency hearing loss. Design: Seventeen listeners (18 ears) with steeply sloping moderate-to-severe high-frequency hearing loss were tested in PTC and TEN tasks intended to identify ears with high-frequency dead regions. In the PTC task, pure-tone signals of fixed level were masked by narrowband noise that slowly increased in center frequency. For a range of signal frequencies, noise levels at masked threshold were determined as a function of masker frequency. In the TEN task, masked thresholds for pure-tone signals were determined for a fixed-level, 70 dB/ERB TEN masker (for some listeners, 85 or 90 dB/ERB TEN was also tested at selected probe frequencies). Results: TEN and PTC results agreed on the presence or absence of dead regions at all tested frequencies in 10 of 18 cases (∼56% agreement rate). Six ears showed results consistent with either mid- or high-frequency dead regions in both tasks, and four ears did not show evidence of dead regions in either task. In eight ears, the TEN and PTC tasks produced conflicting results at one or more frequencies. In instances where the TEN and PTC results disagreed, the TEN results suggested the presence of dead regions whereas the PTC results did not. Conclusions: The 56% agreement rate between the TEN and PTC tasks indicates that at least one of these tasks was only partially reliable as a diagnostic tool. Factors unrelated to the presence of dead regions may contribute to excess masking in TEN without producing tip shifts in PTCs. Thus it may be appropriate to view tuning curve results as more reliable in cases where TEN and PTC results disagree. The current results do not provide support for the TEN task as a reliable diagnostic tool for identification of dead regions.
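
A commonly cited screening rule for the TEN task, attributed to Moore and colleagues, flags a possible dead region when the masked threshold lies at least 10 dB above both the TEN level per ERB and the absolute threshold. The sketch below applies that rule to hypothetical thresholds; the exact criterion used in this study may differ.

```python
# Hypothetical sketch of a TEN-task screening rule for dead regions.
# Rule (as commonly cited; may differ from this study's criterion): flag a
# frequency when the masked threshold in TEN is at least 10 dB above the TEN
# level per ERB AND at least 10 dB above the absolute threshold.

TEN_LEVEL_DB_PER_ERB = 70.0
CRITERION_DB = 10.0

# frequency (Hz): (absolute threshold dB SPL, masked threshold in TEN dB SPL)
thresholds = {
    1000: (45.0, 72.0),
    2000: (60.0, 74.0),
    3000: (75.0, 86.0),
    4000: (80.0, 92.0),
}

for freq, (quiet, masked) in thresholds.items():
    dead = (masked >= TEN_LEVEL_DB_PER_ERB + CRITERION_DB
            and masked >= quiet + CRITERION_DB)
    print(f"{freq} Hz: masked threshold {masked:.0f} dB SPL -> "
          f"{'possible dead region' if dead else 'no dead region indicated'}")
```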


Journal of Communication Disorders | 1987

An evaluation of residue features as correlates of voice disorders

Robert A. Prosek; Allen A. Montgomery; Brian E. Walden; David B. Hawkins

Two experiments were conducted to assess the correlations of residue features with some perceptual properties of voice disorders. First, 90 samples of the vowel /a/ produced by patients with various vocal pathologies were analyzed to obtain the residue features, and severity judgments of these vowel samples were obtained. The results of linear multiple regression analysis indicated that the features were highly correlated with the severity ratings. Second, an attempt was made to correlate the residue features with voice qualities. The features were calculated for the vowel /a/ produced by patients with vocal nodules, vocal fold paralysis, and vocal polyps and by normal talkers. Each vowel sample was rated on ten scales of voice quality. The results revealed high correlations among the quality scales so that discrete subject groups could not be formed. Thus, residue features may be useful in assessing the degree of vocal impairment, but their use as correlates of voice quality must await further research.
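
The regression step, predicting perceived severity from the residue features, can be sketched as ordinary least squares. The feature values and severity ratings below are random placeholders rather than measurements from the study.

```python
import numpy as np

# Minimal sketch of the regression step: predict perceived severity from
# acoustic residue features by ordinary least squares. Random placeholder data.
rng = np.random.default_rng(0)
n_samples, n_features = 90, 4        # e.g., 90 vowel samples, 4 residue features
X = rng.normal(size=(n_samples, n_features))
severity = X @ np.array([0.8, 0.5, -0.3, 0.1]) + rng.normal(scale=0.5, size=n_samples)

# Fit severity ~ features (with an intercept) by least squares.
design = np.column_stack([np.ones(n_samples), X])
coef, *_ = np.linalg.lstsq(design, severity, rcond=None)

# Multiple correlation between predicted and observed severity ratings.
predicted = design @ coef
r = np.corrcoef(predicted, severity)[0, 1]
print(f"multiple correlation R = {r:.2f}")
```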


Journal of Fluency Disorders | 1979

Reaction-time measures of stutterers and nonstutterers

Robert A. Prosek; Allen A. Montgomery; Brian E. Walden; Daniel M. Schwartz

Ten adult male stutterers and ten adult male nonstutterers participated in six reaction-time tasks designed to measure manual, acoustic, and laryngeal-region response latencies. The analysis revealed statistically significant differences between the groups for the acoustic data only. The results indicated that acoustic reaction-time differences are not accounted for by the speed of the general laryngeal response.


Annals of Otology, Rhinology, and Laryngology | 1988

The Tricyclic Trimipramine in the Treatment of Subjective Tinnitus

R. Clifford Mihail; Joseph Fishburne; Joanne M. Crowley; John E. Reinwall; Brian E. Walden; Joan T. Zajtchuk

We examined 26 consecutive patients with subjective tinnitus. All subjects were treated with the tricyclic antidepressant trimipramine in a double-blind study, each subject acting as his own control. All subjects were evaluated with pure tone audiometry, site of lesion testing, and auditory brain stem evoked response. The tinnitus assessment consisted of frequency and intensity matching, the determination of masking levels, and a subjective evaluation of severity. Plasma levels of trimipramine were monitored at regular intervals, and the Zung and Millon inventories were administered at the beginning and end of each study period. Nineteen subjects completed the study. Within the trimipramine group, one reported complete disappearance of his tinnitus, eight reported improvement, three no change, and seven that tinnitus was worse. Within the placebo group, eight reported improvement, seven no change, and four that tinnitus was worse. The natural history of tinnitus is such that what has been observed may reflect the evolution of the disease itself, rather than the effect of treatment. We feel that while tricyclics may not have been shown to be effective, the placebo effect played a significant role in the results obtained.


Ear and Hearing | 2001

Effects of amplification and speechreading on consonant recognition by persons with impaired hearing.

Brian E. Walden; Kenneth W. Grant; Mary T. Cord

Objective: This study sought to describe the consonant information provided by amplification and by speechreading, and the extent to which such information might be complementary when a hearing aid user can see the talker's face. Design: Participants were 25 adults with acquired sensorineural hearing losses who wore the GN ReSound BT2 Personal Hearing System binaurally. Consonant recognition was assessed under four test conditions, each presented at an input level of 50 dB SPL: unaided listening without speechreading (baseline), aided listening without speechreading, unaided listening with speechreading, and aided listening with speechreading. Confusion matrices were generated for each of the four conditions to determine overall percent correct for each of 14 consonants, and information transmitted for place of articulation, manner of articulation, and voicing features. Results: Both amplification and speechreading provided a significant improvement in consonant recognition from the baseline condition. Speechreading provided primarily place-of-articulation information, whereas amplification provided information about place and manner of articulation, as well as some voicing information. Conclusions: Both amplification and speechreading provided place-of-articulation cues. The manner-of-articulation and voicing cues provided by amplification, therefore, were generally complementary to speechreading. It appears that the synergistic effect of combining the two sources of information can be optimized by amplification parameters that provide good audibility in the low-to-mid frequencies.


Ear and Hearing | 1984

Training auditory-visual speech reception in adults with moderate sensorineural hearing loss.

Allen A. Montgomery; Brian E. Walden; Daniel M. Schwartz; Robert A. Prosek

A new method of training auditory-visual speech reception is described and evaluated on an experimental group of 12 hearing-impaired adult patients. The method involves simultaneous, live presentation of the visible and acoustic components of the therapist's speech, where the acoustic signal is degraded under the therapist's control with a voice-activated switch. Pre- and post-training performance was assessed with an auditory-visual sentence recognition task. The performance of the experimental group, who received 10 hours of individual training, is described and compared to that of a control group who received a traditional aural rehabilitation program and to a group of normal-hearing listeners who received no training. The experimental training resulted in significantly greater improvement than that obtained by the control group. A description of the training, including rationale and suggestions for implementation in a clinical setting, is provided.


The Hearing Journal | 2003

Real-world performance of directional microphone hearing aids

Brian E. Walden; Rauna K. Surr; Mary T. Cord

Difficulty understanding speech in the presence of background noise is a common complaint of persons with impaired hearing. Currently, directional microphones are the only option available in hearing aids that offer the potential of significantly improving the signal-to-noise ratio (SNR) for the wearer. In this issue, Andrew Dittberner has provided options for measuring the directivity of the microphone system, while Todd Ricketts has presented laboratory evidence on the effectiveness of the various microphone designs. In this article, we will address the issue of real-world benefit.

The directivity of directional microphone hearing aids is typically measured in an anechoic space. The purpose of making these physical measurements in a non-reverberant enclosure is to optimize the influence of the angle of incidence of the signal. In a more reverberant environment, direct and reflected sounds from a given source could enter both microphones at comparable intensities, thereby defeating the directional processing. Similarly, behavioral measures of directionality are typically made in a sound-treated test booth. The speech signal is presented through a loudspeaker positioned at 0° azimuth and the background noise is presented from one or more additional loudspeakers that are often positioned at azimuths corresponding to the primary nulls in the polar response of the directional microphones. Again, such a testing arrangement tends to optimize the directional processing.

It goes without saying that persons with impaired hearing almost never encounter listening environments in daily living that are as sound-treated as an audiometric test booth, much less that are anechoic. Hence, it is not surprising that the performance of directional microphone hearing aids in everyday listening situations generally falls short of what might be expected based on measures of the directivity index or the directional advantage.

The discrepancy between the performance of directional microphones in the test booth and that typically observed in everyday listening is illustrated by the results of Walden et al. [1]. We obtained test booth measures of speech recognition in background noise and everyday ratings of speech intelligibility in noisy listening situations for each microphone mode of a switchable omnidirectional/directional hearing aid. To obtain the test-booth measures, we used the Connected Speech Test (CST) [2,3]. Test sentences were presented from a loudspeaker positioned at 0° azimuth, and a multitalker babble was presented from loudspeakers positioned at 90°, 180°, and 270°. Two test conditions were included: a 60-
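
The directivity index mentioned above summarizes a microphone's polar response in a single number: on-axis sensitivity relative to the sensitivity averaged over all directions. The sketch below computes a simplified horizontal-plane (two-dimensional) version for a textbook cardioid pattern; a full directivity index integrates over the sphere, and the pattern here is illustrative rather than a measured hearing aid response.

```python
import numpy as np

# Minimal 2-D sketch of a directivity index: on-axis power relative to the
# power averaged over all azimuths. The cardioid pattern below is a textbook
# example, not a measured hearing aid microphone.

theta = np.linspace(0.0, 2.0 * np.pi, 3600, endpoint=False)
cardioid = 0.5 * (1.0 + np.cos(theta))       # amplitude response vs azimuth

on_axis_power = cardioid[0] ** 2
mean_power = np.mean(cardioid ** 2)
di_db = 10.0 * np.log10(on_axis_power / mean_power)
print(f"2-D directivity index of a cardioid ~ {di_db:.1f} dB")
```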

Collaboration


Dive into Brian E. Walden's collaborations.

Top Co-Authors

Allen A. Montgomery
Walter Reed Army Medical Center

Rauna K. Surr
Walter Reed Army Medical Center

Mary T. Cord
Walter Reed Army Medical Center

Ken W. Grant
Walter Reed Army Medical Center

Daniel M. Schwartz
Walter Reed Army Medical Center

David B. Hawkins
Walter Reed Army Medical Center

Cord Mt
Walter Reed Army Institute of Research

Van Summers
Walter Reed Army Institute of Research

Don W. Worthington
Walter Reed Army Medical Center

Marjorie R. Leek
Walter Reed Army Medical Center