Kim S. Abouchacra
American University of Beirut
Publications
Featured research published by Kim S. Abouchacra.
Ear and Hearing | 2008
Abdul-Latif Hamdan; Kim S. Abouchacra; Adina Zeki Al Hazzouri; Georges Zaytoun
Objectives: The objective of this study was to determine whether transient-evoked otoacoustic emissions (TEOAEs) measured in a group of normal-hearing professional singers, who were frequently exposed to high-level sound during rehearsals and performances, differed from those measured in age- and gender-matched normal-hearing non-singers, who were at minimal risk of hearing loss resulting from excessive sound exposure or other risk factors. Design: Twenty-three normal-hearing singers (NH-Ss), 23 normal-hearing controls (NH-Cs), and 9 hearing-impaired singers (HI-Ss) were included. Pure-tone audiometry confirmed normal-hearing thresholds (≤15 dB HL) at 0.5, 1.0, 2.0, 3.0, 4.0, 6.0, and 8.0 kHz in NH-Ss and NH-Cs, and confirmed mild, high-frequency sensorineural hearing loss in HI-Ss (HI-Ss were included only to estimate sensitivity and specificity values for preliminary pass or fail criteria that could be used to help identify NH-Ss at risk for music-induced hearing loss). TEOAEs were measured twice in all ears. TEOAE signal-to-noise ratio (S/N) and reproducibility were examined for the whole wave response, and for frequency bands centered at 1.0, 1.4, 2.0, 2.8, and 4.0 kHz. Results: Moderate to high correlations were found between test and retest TEOAE responses for the three groups. However, absolute test–retest differences revealed standard deviations that were two to three times larger than those reported previously, with the majority of the variability occurring for the 1.0 kHz band. As such, only the best TEOAE response (B-TEOAE) from the two measurements in each ear was used in further analyses, with data from the 1.0 kHz band excluded. With one exception, within-group comparisons of B-TEOAE S/N and reproducibility across ears and gender revealed no statistically significant differences for either NH-Ss or NH-Cs. The only significant within-group difference was between left and right ears of NH-C females for S/Ns measured in the 2.0 kHz band, where median responses from right ears were found to be higher than left ears. Across-group comparisons of B-TEOAEs revealed lower median S/N and reproducibility values for NH-Ss compared with NH-Cs for the whole wave response and 1.4 kHz band. For the 2.0 kHz band, reproducibility was similar for the normal-hearing groups but median S/N was found to be lower for NH-Ss. No significant differences in S/N or reproducibility were found between normal-hearing groups for the 2.8 and 4.0 kHz bands. Using data from NH-Cs and HI-Ss to establish sensitivity and specificity values for various TEOAE pass or fail criteria, six preliminary criteria were identified as having sensitivity and specificity values ≥90%. When these criteria were applied to NH-Ss, the number of NH-S ears passing ranged from 57% to 76%, depending on the criteria used to judge the NH-S ears, which translates into 24% to 43% of ears failing. Conclusions: Although TEOAE responses were measurable in all singers with normal audiometric thresholds, responses were less robust than those of NH-Cs. The findings suggest that subtle cochlear dysfunction can be detected with TEOAE measurement in a subset of normal-hearing professional singers. Although preliminary, the study findings highlight the influence of pass or fail criterion choice on the number of ears that will be identified as “at risk” for music-induced hearing loss.
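The pass or fail analysis above rests on the standard definitions: sensitivity is the proportion of hearing-impaired ears that a criterion correctly fails, and specificity is the proportion of normal-hearing control ears it correctly passes. Below is a minimal sketch of that computation; the 6 dB S/N cutoff and all S/N values are hypothetical illustrations, not the study's data or its six criteria.

```python
# Minimal sketch: sensitivity and specificity of a TEOAE pass/fail
# criterion. The cutoff and the S/N values are hypothetical.

def evaluate_criterion(snr_impaired, snr_controls, cutoff_db):
    """An ear fails when its TEOAE S/N falls below the cutoff.

    Sensitivity = proportion of hearing-impaired ears that fail.
    Specificity = proportion of control ears that pass.
    """
    sensitivity = sum(s < cutoff_db for s in snr_impaired) / len(snr_impaired)
    specificity = sum(s >= cutoff_db for s in snr_controls) / len(snr_controls)
    return sensitivity, specificity

# Hypothetical S/N values (dB) for HI-S ears and NH-C ears.
hi_s_snr = [2.1, 4.5, 0.8, 5.2, 3.0, 1.7, 4.9, 2.6, 3.8]
nh_c_snr = [9.4, 12.1, 8.7, 10.3, 7.9, 11.5, 9.8, 13.0, 8.2, 10.9]

sens, spec = evaluate_criterion(hi_s_snr, nh_c_snr, cutoff_db=6.0)
print(f"sensitivity = {sens:.0%}, specificity = {spec:.0%}")  # 100%, 100%
```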
Human Factors | 2001
Kim S. Abouchacra; Jean Breitenbach; Timothy Mermagen; Tomasz Letowski
This study assessed the effects of spatialized sound presentation on a listener's ability to monitor target (T) messages in the presence of competing (C) messages and high-level (110 dB[A]) background noise (BGN). In a simulated military environment, 8 participants wore two-channel, active noise reduction (ANR)-equipped helmets and listened to combinations of T and C messages (89 dB[A] at the ear). T messages were presented synchronously with 0, 1, 2, and 3 C messages in four listening modes: (a) BGN + diotic, (b) BGN + dichotic, (c) BGN + spatial audio, and (d) quiet + spatial audio. Best overall performance occurred in the spatialized modes (c and d) and poorest in the diotic mode (a). As expected, speech recognition was better in quiet than in BGN when multiple C messages were present. Findings indicate that message spatialization in acoustic space improves auditory performance during times of heavy message competition, even in high-level noise. The proposed technology has numerous applications, such as multichannel communications in tactical operations centers, monitoring of complex security systems, and air traffic control.
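The abstract does not describe how the spatial audio modes were rendered. As a rough illustration of the binaural cues that make spatialized messages easier to segregate, the sketch below lateralizes a mono signal using an interaural time difference (ITD) and an interaural level difference (ILD); a real spatial display such as the one in this study would typically use full HRTF filtering instead.

```python
import numpy as np

FS = 44_100  # sample rate in Hz (an assumed value)

def lateralize(mono, itd_us=400.0, ild_db=6.0):
    """Crudely place a mono signal toward the right ear by delaying
    and attenuating the left channel (ITD + ILD cues only)."""
    delay = int(round(itd_us * 1e-6 * FS))               # ITD in samples
    left = np.concatenate([np.zeros(delay), mono]) * 10 ** (-ild_db / 20)
    right = np.concatenate([mono, np.zeros(delay)])
    return np.stack([left, right], axis=1)               # (n, 2) stereo

# Example: a 1 s, 500 Hz tone "message" heard to the listener's right.
t = np.arange(FS) / FS
stereo = lateralize(0.5 * np.sin(2 * np.pi * 500 * t))
```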
International Journal of Audiology | 2011
Kim S. Abouchacra; Joan Besing; Janet Koehnke; Tomasz Letowski
Objective: To determine the effects of room reverberation on target sentence recognition in the presence of 0 to 3 synchronous masking sentences. Design: Target and masker sentences were presented through four loudspeakers (±90° and ±45° azimuth; 1 m from the listener) in rooms having reverberation times (RT) of 0.2, 0.4, 0.6, and 1.1 s. Study Sample: Four groups of 13 listeners each participated in the study (N = 52). Results: In rooms with RTs of 0.2, 0.4, and 0.6 s, mean speech recognition scores (SRSs) were similar, with scores ranging from 96–100%, 90–95%, 75–80%, and 53–60% when 0, 1, 2, and 3 competing sentences were present, respectively. However, in the room with an RT of 1.1 s, SRSs deteriorated significantly faster as the number of competing sentences increased; mean scores were 93%, 73%, 26%, and 10% in the 0, 1, 2, and 3 competing-sentence conditions, respectively. The majority of errors in SRSs (98%) resulted from listeners reporting words presented in masking sentences along with those in target sentences (mixing errors). Conclusions: Results indicate that reverberation has a similar influence on SRSs measured in multi-talker environments when room reverberation is ≤0.6 s. However, SRSs are dramatically reduced in the room with an RT of 1.1 s, even when only one competing talker is present.
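Reverberation time (RT, often written RT60) is the time it takes sound energy in a room to decay by 60 dB after the source stops. A common way to approximate a room with a given RT in simulation is to shape white noise with the matching exponential decay and convolve it with the dry signal. This is a toy model for illustration, not the rooms or stimuli used in the study.

```python
import numpy as np

def synthetic_rir(rt60, fs=16_000):
    """Exponentially decaying white noise as a toy room impulse
    response; amplitude reaches -60 dB at t = rt60 seconds."""
    t = np.arange(int(rt60 * fs)) / fs
    decay = 10 ** (-3.0 * t / rt60)          # 20*log10(1e-3) = -60 dB
    rng = np.random.default_rng(0)
    return rng.standard_normal(t.size) * decay

# "Reverberate" a dry signal at the study's four reverberation times.
dry = np.zeros(16_000)
dry[0] = 1.0                                 # placeholder for speech
for rt in (0.2, 0.4, 0.6, 1.1):
    wet = np.convolve(dry, synthetic_rir(rt))
    print(f"RT = {rt} s -> {wet.size} output samples")
```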
Journal of the Acoustical Society of America | 2010
Kim S. Abouchacra; Janet Koehnke; Joan Besing; Tomasz Letowski
Monitoring multi-channel radio communication is a common activity for many military and civilian professionals, such as those who work in tactical operation centers, command-and-control towers, and voice interception facilities. In such environments, communication is critical and errors in speech understanding could cost time, equipment, and even loss of life. When an individual has a hearing loss, these costs have the potential to increase substantially. The purpose of this study was to compare speech recognition scores (SRSs) of listeners with normal hearing and noise-induced hearing loss, when target messages were presented with two, three, or four interfering messages through various spatial and non-spatial auditory displays. For both groups, SRSs decreased for all display types as the number of competing messages increased. However, listeners with hearing impairment had significantly poorer SRSs than listeners with normal hearing, with decreases in scores ranging from 10%–25% in all conditions. While bot...
Journal of the Acoustical Society of America | 2010
Kim S. Abouchacra; Solara Sinno; Tomasz Letowski
Increasing complexity of military vehicles, high noise levels, and the command and control demands of modern warfare place a high physical and mental load on the crews operating the vehicles. The need for indirect driving and simultaneous control of various robotic systems demands multisensory interfaces between the crew and the operated systems. One promising type of new interface is a three-dimensional (3-D) audio interface that presents warning signals and tactical messages at spatial locations associated with the content of the emitted signals or messages. However, not all information can be displayed through a 3-D interface with the needed resolution and sound quality, and either too little or too much information can be equally detrimental to the user. This paper summarizes data obtained through a user survey administered to 105 tankers regarding the expected functionality of 3-D interfaces in armored vehicles. The results of the survey strongly support the value of the 3-D audio interface for ...
Journal of the Acoustical Society of America | 1997
Joan Besing; Janet Koehnke; Kim S. Abouchacra; Tuyen V. Tran
In everyday listening situations, binaural information enhances the ability to understand speech messages in multitalker environments. For applications to virtual auditory technology in such environments, the importance of preserving characteristics of the natural listening environment is not clear. The purpose of this study was to assess listeners’ ability to monitor target (T) messages in the presence of synchronous competing (C) messages in an anechoic and a reverberant environment in three modes: (1) through loudspeakers at ±45° and ±90° with the subject seated in the actual room; (2) through the same loudspeakers with the subject listening remotely to the stimulus presented to a manikin in the actual room; and (3) with the environment simulated to create virtual sources at ±45° and ±90° azimuth under earphones. Fourteen subjects listened to messages selected from four lists of 2034 ten‐syllable sentences. In every listening condition, the T message was presented 40 times to the subject in the presence of 0, 1, 2, and 3 C messages; subjects recorded the T messages. Results demonstrate that the ability to understand the T messages decreases as the number of C messages increases. As expected, speech recognition is better in the anechoic environment than the reverberant environment when C messages are present. Performance is degraded whenever the subjects do not listen with their own ears. [Work supported by the U. S. Army Research Lab.]
Journal of the Acoustical Society of America | 1995
Tuyen V. Tran; Tomasz Letowski; Kim S. Abouchacra
A series of experiments was conducted to evaluate 3D auditory beacons to be used by drivers of remote-controlled vehicles. The auditory beacons were generated from an external sound source, conditioned using the Convolvotron™, and then presented through Sennheiser HD-580 earphones. In addition, a pink noise masker was presented through an overhead loudspeaker at a level of 80 dBA measured under the earphones. Ten listeners with normal hearing were asked to (1) judge the sound quality of nine auditory beacons, and (2) move a beacon from a predetermined starting location to a position directly in front of the listener in the 3D display. The beacons differed in type of sound as well as in rate and mode of presentation (continuous versus noncontinuous, and single versus oscillating sound source). Results of the experiments indicate that listeners preferred (1) continuous versus interrupted presentation of the beacons, (2) nonspeech versus speech beacons, and (3) a rate of 1.1 repetitions per second over 0.7 or 2....
Journal of the Acoustical Society of America | 1994
Kim S. Abouchacra; Tomasz Letowski
The purpose of this investigation was to explore auditory spatial perception in environments containing directional noise. For 40 normal-hearing subjects, detection thresholds were measured for a target signal that was spatially separated from directional noise (65 dBA) by 0°, 45°, 135°, or 180° azimuth. Following these measurements, localization accuracy and confidence ratings were determined for the target signal presented in directional noise at 0, 6, 12, and 18 dB SL. As expected, detection results showed that as the spatial separation between the target signal and the noise source increased, the target signal was more easily detected, with maximum improvements amounting to as much as 13 dB. As sensation level increased, localization accuracy improved and confidence in localization responses increased. Unlike detection, the extent of improvement in localization performance depended more on the spatial location of the target signal than on the amount of spatial separation b...
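Sensation level (dB SL) is level expressed relative to each listener's own detection threshold, so the physical presentation levels in a study like this differ from listener to listener. A one-line conversion with a hypothetical threshold:

```python
def presentation_level_db(threshold_db, sensation_level_db):
    """dB SL is level above the listener's detection threshold, so
    presentation level is simply threshold + sensation level."""
    return threshold_db + sensation_level_db

# Hypothetical: a target detected at 32 dB in noise, presented at the
# study's four sensation levels.
for sl in (0, 6, 12, 18):
    print(f"{sl:2d} dB SL -> {presentation_level_db(32, sl)} dB")
```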
Journal of the Acoustical Society of America | 1994
Kim S. Abouchacra; Pamela A. Mundis; Ellen C. Haas; Tomasz Letowski; Laurie Myers
An exploratory study was conducted to determine which acoustical components influence the detection and recognition of 30 familiar sounds (FS). The sounds were categorized as human-, animal-, environment-, or object-produced sounds. Initially, detection thresholds for 20 normal-hearing subjects were measured for the 30 FS. Following the detection measurements, subjects were trained to recognize FS presented at a comfortable listening level (40 dB HL). Recognition thresholds were then gathered, beginning at each sound's detection threshold. Specifically, at detection threshold, a sound was presented and subjects were required to identify it. If they could not recognize the sound, listeners were asked to describe ‘‘what’’ was heard (i.e., temporal or spectral components). The sound was then presented in 2 dB increments, and the subject's response was recorded at each level. This procedure continued until the level of the sound reached 40 dB HL (the training level). Results revealed that the decibel range between detection an...
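The ascending procedure described above maps onto a simple loop: begin at the sound's detection threshold and raise the level in 2 dB steps until the sound is recognized or the 40 dB HL training level is reached. A minimal sketch with a stand-in listener model; the 28 dB recognition point is hypothetical.

```python
def recognition_threshold(detection_db, training_db=40, step_db=2,
                          recognized=lambda level: level >= 28):
    """Ascend from detection threshold in fixed steps, returning the
    first level at which the (simulated) listener recognizes the
    sound, or None if the training level is reached first."""
    level = detection_db
    while level <= training_db:
        if recognized(level):
            return level
        level += step_db
    return None

print(recognition_threshold(detection_db=14))  # -> 28
```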
Journal of the Acoustical Society of America | 1997
Kim S. Abouchacra; Tomasz Letowski
People make judgments of sound every day. Sound judgments, like judgments of any other entity, can vary in their generality (general–specific) and the importance of the judged characteristic (primary–secondary). The development of sound quality criteria systems and jury testing protocols requires knowledge of both the generality and importance of specific sound assessment criteria. Smith and Letowski [J. Acoust. Soc. Am. 89, 1938 (1991)] reported intuitive generality and importance ratings of selected sound quality attributes measured on naive listeners and proposed a set of sound quality attribute definitions. The object of the present study was: (1) to determine whether listeners maintain their intuitive generality and importance ratings for selected sound quality attributes following exposure to the proposed definitions, and (2) to assess the effect of jury testing experience on the preservation of intuitive generality and importance ratings. The results support the notion that while some sound attribu...