Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Linda Kozma‐Spytek is active.

Publication


Featured research published by Linda Kozma‐Spytek.


Journal of the Acoustical Society of America | 1994

Quality ratings for frequency‐shaped peak‐clipped speech

James M. Kates; Linda Kozma‐Spytek

Peak clipping is a common form of distortion in hearing aids and can reduce the subjective quality of the amplified speech. In a typical hearing aid, frequency-response shaping precedes symmetric peak clipping. The effect of peak clipping on speech quality ratings was therefore studied using sentence test materials that were processed using different frequency response contours and then clipped at different clipping thresholds. The quality of each processed sentence was rated on a ten-point scale by normal-hearing subjects. The experimental results indicate that the clipping threshold, and the interaction of the frequency-response shaping with the clipping threshold, significantly affect speech quality. It is also shown that the distortion effects on speech quality can be modeled by a distortion index computed from the magnitude-squared coherence of the speech-processing system in response to a shaped-noise input signal.


Journal of the Acoustical Society of America | 1991

VCVs vs CVCs for stop/fricative distinctions by hearing‐impaired and normal‐hearing listeners

Sally G. Revoile; Linda Kozma‐Spytek; Lisa Holden‐Pitt; J. M. Pickett; Janet Droge

Moderately to profoundly hearing-impaired (n = 30) and normal-hearing (n = 6) listeners identified [p, k, t, f, θ, s] in [symbol; see text], and [symbol; see text]s tokens extracted from spoken sentences. The [symbol; see text]s were also identified in the sentences. The hearing-impaired group distinguished stop/fricative manner more poorly for [symbol; see text] in sentences than when extracted. Further, the group’s performance for extracted [symbol; see text] was poorer than for extracted [symbol; see text] and [symbol; see text]. For the normal-hearing group, consonant identification was similar across the syllable and sentence contexts.


Journal of the Acoustical Society of America | 2007

Transformation of live versus recorded speech from the mouth to the open or occluded ear

Dragana Barac‐Cikoja; Linda Kozma‐Spytek; Stephanie Adamovich

Auditory self‐monitoring studies require the participants’ speech feedback to be manipulated experimentally and then presented to the ear. The acoustic levels of the altered speech must be the same as those under normal speech feedback conditions, when speech is transmitted to the ear live, directly from the mouth. Therefore, it is critical to understand the transfer function between the mouth and the eardrum. To further this understanding, live and played‐back self‐generated speech from two female and two male talkers was recorded using a probe tube microphone inserted into the ear canal at a fixed distance from the tragus. The played‐back speech samples were delivered via a B&K artificial mouth, at a sound pressure level calibrated to match the level at the lips. The artificial mouth was placed offset to the participant’s right side and in line with the x and z axes of his/her mouth position. Recordings were conducted with the ears either open or sealed with ER‐3A insert earphones. In addition, the ...


Journal of the Acoustical Society of America | 1999

Real‐ear measurement of hearing‐aid interference from digital wireless telephones

Harry Levitt; Judy Harkins; Linda Kozma‐Spytek; Eddy Yeung

Two studies were performed. The first measured hearing‐aid interference under field conditions. Signal levels in the ear canal were measured using an in‐the‐canal probe system, the output of which was stored digitally on a laptop computer. Both bystander and user interference were measured, as well as the user’s ratings of intelligibility, annoyance, and usability (of the digital wireless telephone). The levels of user interference were found to be unacceptably high for the large majority of hearing‐aid wearers. In contrast, bystander interference was at a relatively low level under conditions typical of telephone use. The second experiment controlled the level of interference by means of the test mode in the digital wireless telephone. Speech‐to‐interference ratios were measured in the ear canal for various levels of intelligibility and usability, as rated by the subjects. Data were obtained for 37 subjects on each of three digital wireless technologies (PC1900, TDMA, CDMA). [Research supported by NIDRR.]


Journal of the Acoustical Society of America | 1994

Predicting hearing‐impaired listeners’ use of acoustic cues for consonant recognition

Linda Kozma‐Spytek; Peggy B. Nelson; Sally G. Revoile; Lisa Holden‐Pitt

Reduced consonant recognition and large inter‐listener variability among individuals with sensorineural hearing loss are neither well characterized nor understood. Previous attempts to account for consonant recognition using threshold and suprathreshold measures have produced equivocal results. Classifications of listeners according to performance patterns for acoustic cue use are lacking. The present study classifies hearing‐impaired listeners according to their use of information in successive vowel and consonant segments for identification of consonants. Seventy hearing‐impaired (moderate to profound) and 19 normal‐hearing listeners identified /n/, /l/, and /d/ in spoken VCV’s extracted from connected speech. The VCV’s were presented unmodified and with consonant and vowel‐transition segment deletions. Percent information transmitted scores for each consonant and stimulus condition were submitted to a hierarchical cluster analysis to classify all listeners into homogeneous groups based on the extent to...


Journal of the Acoustical Society of America | 1991

Impaired‐ and normal‐hearing listeners’ recognition of [b, d, g, v, z, ð] in [ΛCΛ] isolated versus in sentence contexts.

Linda Kozma‐Spytek; S. Revoile; Lisa Holden‐Pitt; J. M. Pickett

Recognition of consonants in connected speech has received limited study for severely hearing‐impaired listeners. Such listeners (n=7) were tested for identification of [b, d, g, v, z, ð] in [ΛCΛ] tokens, each embedded in a spoken carrier sentence (male talker) and extracted from the sentences. The severely hearing‐impaired listeners’ recognition was poorer for the consonants in the [ΛCΛ]’s when in the sentence context than when extracted. In contrast, neither a group of moderately impaired listeners (n=11) nor a group of normal‐hearing listeners (n=6) showed a significant effect of context on consonant recognition. Performance by the severely hearing‐impaired listeners was not improved by insertion of an artificial pause preceding the word containing the test consonant in the sentence.


Journal of Speech Language and Hearing Research | 1996

Quality Ratings for Frequency-Shaped Peak-Clipped Speech: Results for Listeners with Hearing Loss.

Linda Kozma‐Spytek; James M. Kates; S. Revoile


Journal of Rehabilitation Research and Development | 2005

An evaluation of digital cellular handsets by hearing aid users.

Linda Kozma‐Spytek; Judith Harkins


Seminars in Hearing | 2005

In-the-Ear Measurements of Interference in Hearing Aids from Digital Wireless Telephones

Harry Levitt; Linda Kozma‐Spytek; Judith Harkins


Ear and Hearing | 1995

Acoustic-phonetic context considerations for speech recognition testing of hearing-impaired listeners.

Sally G. Revoile; Linda Kozma‐Spytek; Lisa Holden‐Pitt; J. M. Pickett; Janet Droge

Collaboration


Dive into Linda Kozma‐Spytek’s collaborations.

Top Co-Authors

Sally G. Revoile
United States Department of Veterans Affairs

Harry Levitt
City University of New York

James M. Kates
University of Colorado Boulder

S. Revoile
University of Washington

Eddy Yeung
City University of New York