
Publication


Featured research published by Lawrence L. Feth.


American Journal of Audiology | 2002

Background Noise Levels and Reverberation Times in Unoccupied Classrooms: Predictions and Measurements

Heather A. Knecht; Peggy B. Nelson; Gail M. Whitelaw; Lawrence L. Feth

Classrooms are often filled with deterrents that hamper a child's ability to listen and learn. It is evident that the acoustical environment in classrooms can be one such deterrent. Excessive background noise and reverberation can affect the achievement and educational performance of children with sensorineural hearing loss (SNHL) and children with normal hearing sensitivity who have other auditory learning difficulties, as well as elementary school children with no verbal or hearing disabilities. The purpose of this study was to evaluate the extent of the problem of noise and reverberation in schools. To that end, we measured reverberation times and background noise levels in 32 different unoccupied elementary classrooms in eight public school buildings in central Ohio. The results were compared with the limits recommended in the American National Standards Institute standard for acoustical characteristics of classrooms in the United States (ANSI S12.60-2002). These measurements were also compared to the external and internal criteria variables developed by Crandell, Smaldino, & Flexer (1995) to determine whether a simple checklist can accurately predict unwanted classroom background noise levels and reverberation. Results indicated that most classrooms were not in compliance with ANSI noise and reverberation standards. Further, our results suggested that a checklist was not a good predictor of the noisier and more reverberant rooms.
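The compliance comparison described above can be sketched as a simple check against the two limits most often cited from ANSI S12.60-2002 for typical classrooms (a 35 dBA background noise level and a 0.6 s reverberation time). Both numeric limits and the room measurements below are assumptions for illustration, not values taken from the study.

```python
# Sketch: compare measured classroom values against assumed ANSI
# S12.60-2002 limits. The limits (35 dBA background noise, 0.6 s RT60)
# are drawn from common summaries of the standard, not from this study.

ANSI_MAX_NOISE_DBA = 35.0   # assumed background-noise limit
ANSI_MAX_RT60_S = 0.6       # assumed reverberation-time limit

def classroom_compliant(noise_dba: float, rt60_s: float) -> bool:
    """True if a room meets both assumed ANSI S12.60-2002 limits."""
    return noise_dba <= ANSI_MAX_NOISE_DBA and rt60_s <= ANSI_MAX_RT60_S

# Hypothetical measurements: (background noise dBA, RT60 in seconds)
rooms = {"A": (33.2, 0.45), "B": (41.0, 0.52), "C": (36.5, 0.71)}
results = {name: classroom_compliant(*vals) for name, vals in rooms.items()}
```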


Journal of the Acoustical Society of America | 2004

Bandwidth of spectral resolution for two-formant synthetic vowels and two-tone complex signals

Qiang Xu; Ewa Jacewicz; Lawrence L. Feth; Ashok K. Krishnamurthy

Spectral integration refers to the summation of activity beyond the bandwidth of the peripheral auditory filter. Several lines of experiments have sought to determine the bandwidth of this supracritical band phenomenon. This paper reports on two experiments that tested the limit on spectral integration in the same listeners. Experiment I verified the critical separation of 3.5 bark in two-formant synthetic vowels advocated by the center-of-gravity (COG) hypothesis. According to the COG effect, two formants are integrated into a single perceived peak if their separation does not exceed approximately 3.5 bark. With several modifications to the methods of a classic COG matching task, the present listeners responded to changes in pitch in two-formant synthetic vowels rather than estimating their phonetic quality. When the amplitude ratio of the formants was changed, the frequency of the perceived peak moved closer to that of the stronger formant. This COG effect disappeared at larger formant separations. In a second experiment, auditory spectral resolution bandwidths were measured for the same listeners using common-envelope, two-tone complex signals. Results showed that the limits of spectral averaging in two-formant vowels and the two-tone spectral resolution bandwidth were related for two of the three listeners. The third failed to perform the discrimination task. For the two subjects who completed both tasks, the results suggest that the critical region in the vowel task and the complex-tone discriminability estimates are linked to a common mechanism, i.e., to an auditory spectral resolving power. A signal-processing model is proposed to predict the COG effect in two-formant synthetic vowels. The model introduces two modifications to Hermansky's [J. Acoust. Soc. Am. 87, 1738-1752 (1990)] perceptual linear predictive (PLP) model.
The model predictions are generally compatible with the present experimental results and with the predictions of several earlier models accounting for the COG effect.
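As a rough illustration of the COG hypothesis tested above, the sketch below converts two formant frequencies to bark using Traunmüller's approximation (an assumption; the paper's own bark conversion may differ) and returns their amplitude-weighted center of gravity only when the formants fall within the putative 3.5-bark integration limit.

```python
import math

def hz_to_bark(f_hz: float) -> float:
    """Traunmueller's (1990) approximation of the bark scale (assumed
    here; not necessarily the conversion used in the paper)."""
    return 26.81 * f_hz / (1960.0 + f_hz) - 0.53

def perceived_peak_bark(f1_hz, a1, f2_hz, a2):
    """Amplitude-weighted spectral center of gravity of two formants,
    in bark. Per the COG hypothesis, integration is only expected when
    the formants are within about 3.5 bark of each other."""
    z1, z2 = hz_to_bark(f1_hz), hz_to_bark(f2_hz)
    if abs(z2 - z1) > 3.5:      # beyond the putative integration limit
        return None             # formants heard as separate peaks
    return (a1 * z1 + a2 * z2) / (a1 + a2)

# Equal amplitudes place the COG midway between the formants;
# boosting one formant pulls the perceived peak toward it.
mid = perceived_peak_bark(1000.0, 1.0, 1400.0, 1.0)
shifted = perceived_peak_bark(1000.0, 1.0, 1400.0, 3.0)
```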


Phonetica | 2008

Spectral Integration of Dynamic Cues in the Perception of Syllable-Initial Stops

Robert A. Fox; Ewa Jacewicz; Lawrence L. Feth

The present experiments examine the potential role of auditory spectral integration and spectral center of gravity (COG) effects in the perception of initial formant transitions in the syllables [da]-[ga] and [tha]-[kha]. Of interest is whether the place distinction for stops in these syllables can be cued by a ‘virtual F3 transition’ in which the percept of a frequency transition is produced by a dynamically changing COG. Listeners perceived the virtual F3 transitions comparably to actual F3 transitions, although the former were a less salient cue. However, in a separate experiment, static ‘virtual F3 bursts’ were not as effective as actual F3 bursts in cueing the alveolar-velar place distinction. These results indicate that virtual F3 transitions can provide phonetic information to the perceptual system and that auditory spectral integration (completed by the central auditory system) may play a significant role in speech perception.


Journal of the Acoustical Society of America | 1996

Phase independence of pitch produced by narrow‐band sounds

Huanping Dai; Quang Nguyen; Gerald Kidd; Lawrence L. Feth; David M. Green

Three listeners matched the pitch of a simple tone to that of narrow-band complex signals having different phases. The pitch matches were independent of the phases; the frequency of the simple tone approximately equaled the center of gravity of the power spectrum of each complex signal. This result is inconsistent with a model that calculates the pitch of a waveform as the average of instantaneous frequency weighted by the envelope of the waveform.
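The matching result above amounts to computing the power-weighted center of gravity of the signal's line spectrum. A minimal sketch, using hypothetical two-tone components:

```python
def spectral_cog_hz(components):
    """Power-weighted spectral center of gravity of a line spectrum.
    `components` is a list of (frequency_hz, amplitude) pairs; power is
    amplitude squared. The abstract reports that matched pitch tracked
    this quantity regardless of component phase."""
    total_power = sum(a * a for _, a in components)
    return sum(f * a * a for f, a in components) / total_power

# Two-tone complex with equal amplitudes: COG falls at the midpoint.
cog = spectral_cog_hz([(950.0, 1.0), (1050.0, 1.0)])
```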


Journal of the Acoustical Society of America | 2018

Development and calibration of a smartphone application for use in sound mapping

Lawrence L. Feth; Evelyn M. Hoglund; Gus Workman; Jared Williams; Morgan Raney; Megan Phillips

Klyn and Feth (2016) reported preliminary work to use a smartphone application in a citizen-science project designed to map sound levels in Columbus, OH. Before the main project began, we discovered that the sound level measuring applications available for download had shortcomings that made them unsuitable for the proposed work. This presentation describes the development of two smartphone applications, one iOS and one Android, and the calibration procedures developed to document their accuracy and reliability. Following the suggestions of Kardous et al. (2014, 2016), we require that the measurements be conducted using an external microphone. In use, microphone voltage is sampled for 30 seconds and processed to reflect the A-weighting scale so that sound levels are recorded as dBA values. The time and location of each sample are saved with the sound level value and can only be uploaded to the project database if the device has been previously calibrated. Calibration stores an offset value that can be ad...
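The A-weighting step the abstract mentions can be sketched with the standard analytic weighting curve from IEC 61672. This is an illustration of the computation only, not the apps' actual code; the `overall_dba` helper and its per-band input format are assumptions.

```python
import math

def a_weight_db(f_hz: float) -> float:
    """A-weighting gain in dB at frequency f (IEC 61672 analytic form).
    The +2.00 dB constant normalizes the curve to 0 dB at 1 kHz."""
    f2 = f_hz * f_hz
    ra = (12194.0**2 * f2**2) / (
        (f2 + 20.6**2)
        * math.sqrt((f2 + 107.7**2) * (f2 + 737.9**2))
        * (f2 + 12194.0**2)
    )
    return 20.0 * math.log10(ra) + 2.00

def overall_dba(band_levels):
    """Combine per-band SPLs, given as (f_hz, level_db) pairs, into one
    A-weighted level: weight each band, then sum on a power basis.
    (Hypothetical helper, not part of the described application.)"""
    total = sum(10.0 ** ((lvl + a_weight_db(f)) / 10.0) for f, lvl in band_levels)
    return 10.0 * math.log10(total)
```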


Journal of the Acoustical Society of America | 2018

Effects of incomplete feedback on response bias in auditory detection reanalyzed

Shuang Liu; Matthew H. Davis; Lawrence L. Feth

Davis (2015) reported the effects of providing incomplete feedback to listeners in a simple detection task. When feedback is limited in a detection experiment, the observer’s response criterion may deviate significantly from the optimal criterion. To approximate real-world listening conditions, a single-interval yes-no procedure was conducted with 10 feedback conditions ranging from no feedback to complete feedback. The signal was a brief 1 kHz tone; the masker was wideband white noise. The SIAM procedure (Kaernbach, 1990) was used to establish the SNR for 75% detection threshold for each listener. That level was then used in each of the feedback conditions. Davis reported a detailed descriptive analysis of the symmetry, organization, implicitness, and amount of feedback and the individual differences noted across the ten listeners. For this project, an equal-variance Gaussian signal detection framework was used to analyze the data. Model parameters were estimated via Bayesian inference. The main finding is that, as expected, complete feedback drives response criteria toward the optimum, and deviation from the optimal criterion increases as the amount of feedback decreases. While most subjects show this general trend, a few subjects maintain near-optimal behavior throughout all conditions, which is not a surprise.
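For a single yes-no condition, the equal-variance Gaussian framework mentioned above reduces to two closed-form indices: sensitivity d' and criterion c, where c = 0 is the unbiased (and, for equal priors and symmetric payoffs, optimal) criterion. A minimal sketch, not the authors' Bayesian estimation code; the hit and false-alarm rates below are hypothetical.

```python
from statistics import NormalDist

_z = NormalDist().inv_cdf  # inverse standard-normal CDF (z-score)

def sdt_indices(hit_rate: float, fa_rate: float):
    """Equal-variance Gaussian signal detection indices.
    Returns (d_prime, criterion_c); c = 0 is the unbiased criterion,
    negative c indicates a liberal ("yes"-prone) criterion."""
    d_prime = _z(hit_rate) - _z(fa_rate)
    c = -0.5 * (_z(hit_rate) + _z(fa_rate))
    return d_prime, c

# Hypothetical listener: 85% hits, 20% false alarms
dp, c = sdt_indices(0.85, 0.20)
```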


Timing & Time Perception | 2015

Imagined Temporal Groupings Tune Oscillatory Neural Activity for Processing Rhythmic Sounds

Brandon T. Paul; Per B. Sederberg; Lawrence L. Feth

Temporal patterns within complex sound signals, such as music, are not merely processed after they are heard. We also focus attention to upcoming points in time to aid perception, contingent upon regularities we perceive in the sounds’ inherent rhythms. Such organized predictions are endogenously maintained as meter: the patterning of sounds into hierarchical timing levels that manifest as strong and weak events. Models of neural oscillations provide potential means for how meter could arise in the brain, but little evidence of dynamic neural activity has been offered. To this end, we conducted a study instructing participants to imagine two-based or three-based metric patterns over identical, equally-spaced sounds while we recorded the electroencephalogram (EEG). In the three-based metric pattern, multivariate analysis of the EEG showed contrasting patterns of neural oscillations between strong and weak events in the delta (2–4 Hz) and alpha (9–14 Hz) frequency bands, while theta (4–9 Hz) and beta (16–24 Hz) bands contrasted two hierarchically weaker events. In two-based metric patterns, neural activity did not drastically differ between strong and weak events. We suggest the findings reflect patterns of neural activation and suppression responsible for shaping perception through time.


Hearing Research | 2012

Noise-induced changes in cochlear compression in the rat as indexed by forward masking of the auditory brainstem response.

Eric C. Bielefeld; Evelyn M. Hoglund; Lawrence L. Feth

The current study was undertaken to investigate changes in forward masking patterns using on-frequency and off-frequency maskers of 7 and 10 kHz probes in the Sprague-Dawley rat. Off-frequency forward masking growth functions have been shown in humans to be non-linear, while on-frequency functions behave linearly. The non-linear nature of the off-frequency functions is attributable to active processing from the outer hair cells, and was therefore expected to be sensitive to noise-induced cochlear damage. For the study, auditory brainstem responses (ABRs) were recorded from nine Sprague-Dawley rats with and without forward maskers. Forward masker-induced changes in latency and amplitude of the initial positive peak of the rats' auditory brainstem responses were assessed with both off-frequency and on-frequency maskers. The rats were then exposed to a noise designed to induce 20–40 dB of permanent threshold shift. Twenty-one days after the noise exposure, the forward masking growth functions were measured to assess noise-induced changes in the off-frequency and on-frequency forward masking patterns. Pre-exposure results showed compressive non-linear masking effects of the off-frequency conditions on both latency and amplitude of the auditory brainstem response. The noise rendered the off-frequency forward masking patterns more linear, consistent with human behavioral findings. On- and off-frequency forward masking growth functions were calculated, and they displayed patterns consistent with human behavioral functions, both prior to and after the noise exposure.


Auditory Physiology and Perception: Proceedings of the 9th International Symposium on Hearing Held in Carcens, France, on 9–14 June 1991 | 1992

Identification of Initial Stop Consonants Processed by the Patterson–Holdsworth ASP Model

Robert A. Fox; Lawrence L. Feth

This chapter describes the types of dynamic auditory cues that may be important in the perception of syllable-initial stop consonants. Auditory representations are not an adequate or reasonable reflection of actual dynamic auditory processing. The 1/6 octave filter bank and mel-scale frequency axis are used to approximate the peripheral filtering of the normal ear. The auditory speech processing (ASP) front-end closely simulates the peripheral filtering of the human auditory system. Rather than processed running spectra, the output of the ASP model is a stabilised auditory image (SAI), which preserves the spectral dynamics of time-varying, complex sounds in a display that mimics the pattern of excitation within the nervous system. The chapter discusses whether a sufficient set of the dynamic cues are available in the SAI and recoverable by viewers, which could signal both the place-of-articulation and the voicing distinctions of initial stop consonants in American English.
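The two frequency axes mentioned above can be illustrated with standard conversions: a common mel-scale formula and the geometric spacing of a 1/6-octave filter bank. Both are textbook approximations assumed here for illustration, not taken from the chapter itself.

```python
import math

def hz_to_mel(f_hz: float) -> float:
    """O'Shaughnessy's common mel-scale formula (one of several variants
    in use; the chapter's exact mapping is not specified)."""
    return 2595.0 * math.log10(1.0 + f_hz / 700.0)

def sixth_octave_centers(f_start_hz: float, n_bands: int):
    """Center frequencies of a 1/6-octave filter bank: each band's
    center is 2**(1/6) times the previous one, so six bands per octave."""
    ratio = 2.0 ** (1.0 / 6.0)
    return [f_start_hz * ratio**k for k in range(n_bands)]
```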


Journal of Speech Language and Hearing Research | 1992

Temporal resolution in normal-hearing and hearing-impaired listeners using frequency-modulated stimuli.

John P. Madden; Lawrence L. Feth
