Matthew J. Makashay
Ohio State University
Publications
Featured research published by Matthew J. Makashay.
Journal of the Acoustical Society of America | 2004
Matthew J. Makashay
This study investigates whether there are systematic individual differences in the perceptual weighting of frequency and durational speech cues for vowels and fricatives (and their nonspeech analogs) among a dialectally homogeneous group of speakers. Listeners performed AX discrimination for four separate types of stimuli: sine wave vowels, narrow‐band fricatives, synthetic vowels, and synthetic fricatives. Duration and F1 frequency were manipulated for the vowels in heed and hid, and duration and frequency of the fricative centroid in the F5 region were manipulated for the fricatives in bath and bass. Dialect production and perception tasks were included to ensure that subjects were not from dissimilar dialects. Multidimensional scaling results indicated that there are subgroups within a dialect that attend to frequency and duration differently, and that not all listeners use these cues consistently across dissimilar phones. If subgroups can have different perceptions of speech despite similar production...
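As a rough illustration of the multidimensional scaling analysis described above, the sketch below runs metric MDS on a precomputed perceptual dissimilarity matrix (e.g., derived from AX discrimination accuracy). All data and parameter values are fabricated; the study's actual stimuli and scaling procedure may differ.

```python
# Illustrative sketch: multidimensional scaling of AX-discrimination data.
# Assumes a per-listener dissimilarity matrix (e.g., 1 - proportion "same"
# responses) over stimulus pairs; all names and data here are hypothetical.
import numpy as np
from sklearn.manifold import MDS

rng = np.random.default_rng(0)

# Hypothetical 6-stimulus set varying in F1 frequency and duration:
# dissimilarities[i, j] = perceptual distance between stimuli i and j.
dissimilarities = rng.uniform(0.1, 1.0, size=(6, 6))
dissimilarities = (dissimilarities + dissimilarities.T) / 2  # symmetrize
np.fill_diagonal(dissimilarities, 0.0)

# Two dimensions, interpretable post hoc as "frequency" and "duration"
# if the stimulus design supports that reading.
mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
coords = mds.fit_transform(dissimilarities)
print(coords)        # stimulus coordinates in the recovered perceptual space
print(mds.stress_)   # goodness-of-fit (lower is better)
```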
Journal of The American Academy of Audiology | 2015
Melissa Kokx-Ryan; Julie I. Cohen; Mary T. Cord; Therese C. Walden; Matthew J. Makashay; Benjamin Sheffield; Douglas S. Brungart
BACKGROUND: Frequency-lowering (FL) algorithms are an alternative method of providing access to high-frequency speech cues. There is currently a lack of independent research addressing: (1) what functional, measurable benefits FL provides; (2) which, if any, FL algorithm provides the maximum benefit; (3) how to clinically program algorithms; and (4) how to verify algorithm settings.

PURPOSE: Two experiments were included in this study. The purpose of Experiment 1 was to (1) determine whether a commercially available nonlinear frequency compression (NLFC) algorithm provides benefit, as measured by improved speech recognition in noise, when fit and verified using standard clinical procedures; and (2) evaluate the impact of acclimatization. The purpose of Experiment 2 was to (1) evaluate the benefit of using enhanced verification procedures to systematically determine the optimal application of a prototype NLFC algorithm, and (2) determine whether the optimized prototype NLFC settings provide benefit as measured by improved speech recognition in quiet and in noise.

RESEARCH DESIGN: A single-blind, within-participant, repeated-measures design in which participants served as their own controls.

STUDY SAMPLE: Experiment 1 included 26 participants with a mean age of 68.3 yr, and Experiment 2 included 37 participants with a mean age of 68.8 yr. Participants were recruited from the Audiology and Speech Pathology Center at Walter Reed National Military Medical Center in Bethesda, MD.

INTERVENTION: Participants in Experiment 1 wore bilateral commercially available hearing aids fit using standard clinical procedures and clinician expertise. Participants in Experiment 2 wore a single prototype hearing aid for which FL settings were systematically examined to determine the optimum application. In each experiment, FL-On versus FL-Off settings were examined in a variety of listening situations to determine benefit and possible implications.

DATA COLLECTION AND ANALYSIS: In Experiment 1, speech recognition measures using the QuickSIN and Modified Rhyme Test stimuli were obtained at the initial bilateral fitting and 3-5 weeks later during a follow-up visit. In Experiment 2, Modified Rhyme Test, /sə/-/∫ə/ consonant discrimination, and dual-task cognitive-load speech recognition performance measures were conducted. Participants in Experiment 2 received four different systematic hearing aid programs during an initial visit, and speech recognition data were collected over 2-3 follow-up sessions.

RESULTS: Some adults with hearing loss obtained small-to-moderate benefits from implementation of FL, while others maintained performance without detriment in both experiments. There was no significant difference among the FL-On settings systematically obtained in Experiment 2. There was a modest but significant age effect in listeners of both experiments, indicating that older listeners (>65 yr) might benefit more on average from FL than younger listeners. In addition, there were reliable improvements in the intelligibility of the phonemes /ŋ/ and /b/ for both groups, and /ð/ for older listeners, from FL in both experiments.

CONCLUSIONS: Although the optimum settings, application, and benefits of FL remain unclear at this time, there does not seem to be degradation in listener performance when FL is activated. The benefits of FL should be explored in older adult (>65 yr) listeners, as they tended to benefit more from FL applications.
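For readers unfamiliar with frequency lowering, the sketch below shows the basic frequency-mapping rule commonly used to describe nonlinear frequency compression: input frequencies below a cutoff pass through unchanged, while those above it are compressed toward the cutoff by a fixed ratio. The cutoff and ratio here are illustrative, and commercial NLFC implementations use proprietary (often log-domain) variants; this is a simplified textbook form, not the algorithm evaluated in the study.

```python
# Minimal sketch of a nonlinear frequency-compression (NLFC) mapping rule.
# Frequencies below the cutoff pass through unchanged; frequencies above it
# are compressed toward the cutoff by a fixed ratio. Parameter values are
# illustrative, not those of any commercial algorithm.

def nlfc_map(f_in: float, cutoff: float = 2000.0, ratio: float = 2.0) -> float:
    """Map an input frequency (Hz) to its frequency-lowered output (Hz)."""
    if f_in <= cutoff:
        return f_in
    return cutoff + (f_in - cutoff) / ratio

if __name__ == "__main__":
    for f in (500.0, 2000.0, 4000.0, 8000.0):
        print(f"{f:7.0f} Hz -> {nlfc_map(f):7.0f} Hz")
```

With these hypothetical settings, an 8000-Hz fricative cue is relocated to 5000 Hz, potentially inside a listener's audible range.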
Clinical Linguistics & Phonetics | 2015
Matthew J. Makashay; Kevin R. Cannard; Nancy Pearl Solomon
This study tested the assumption that speech is more susceptible to fatigue than normal in persons with dysarthria. After 1 h of speech-like exercises, participants with Parkinson's disease (PD) were expected to report increased perceptions of fatigue and to demonstrate fatigability by producing less precise speech, with corresponding acoustic changes, compared to neurologically normal participants. Twelve adults with idiopathic PD and 13 neurologically normal adults produced sentences with multiple lingual targets before and after six 10-min blocks of fast syllable or word productions. Both groups reported increasing self-perceived fatigue over time, but trained listeners failed to detect systematic differences in articulatory precision or speech naturalness between sentences produced before and after the speech-related exercises. Similarly, few systematic acoustic differences occurred. These findings do not support the hypothesis that dysarthric speakers are particularly susceptible to speech-related fatigue; instead, speech articulation generally appears to be resistant to fatigue induced by an hour of moderate functional exercises.
Journal of the Acoustical Society of America | 2004
Nancy Pearl Solomon; Matthew J. Makashay; Benjamin Munson
During speech, movements of the mandible and the tongue are interdependent. For some research purposes, the mandible may be constrained to ensure independent tongue motion. To examine specific spectral characteristics of speech with different jaw positions, ten normal adults produced sentences with multiple instances of /t/, /s/, /∫/, /i/, /ai/, and /ɔi/. Talkers produced stimuli with the jaw free to vary, and while gently biting on 2‐ and 5‐mm bite blocks unilaterally. Spectral moments of /s/ and /∫/ frication and /t/ bursts differed such that mean spectral energy decreased, and diffuseness and skewness increased with bite blocks. The specific size of the bite block had minimal effect on these results, which were most consistent for /s/. Formant analysis for the vocoids revealed lower F2 frequency in /i/ and at the end of the transition in /ai/ when bite blocks were used; F2 slope for diphthongs was not sensitive to differences in jaw position. Two potential explanations for these results involve the phy...
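The spectral moments reported above can be computed by treating the magnitude spectrum as a probability distribution over frequency, as in the sketch below; the signal and sampling rate are stand-ins, and "diffuseness" is taken here as the spectral variance.

```python
# Sketch of spectral-moment analysis for a frication or burst spectrum.
# The magnitude spectrum is treated as a probability distribution over
# frequency; mean (centroid), variance, and skewness follow the standard
# moment definitions. Signal and parameters are illustrative.
import numpy as np

def spectral_moments(signal: np.ndarray, fs: float):
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    p = spectrum / spectrum.sum()           # normalize to a distribution
    mean = np.sum(freqs * p)                # first moment: spectral mean
    var = np.sum((freqs - mean) ** 2 * p)   # second moment: variance ("diffuseness")
    skew = np.sum((freqs - mean) ** 3 * p) / var ** 1.5  # third moment: skewness
    return mean, var, skew

fs = 16000.0
noise = np.random.default_rng(1).standard_normal(1024)  # stand-in for /s/ frication
print(spectral_moments(noise, fs))
```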
Journal of the Acoustical Society of America | 1999
Matthew J. Makashay
This study attempts to determine if speech perception varies across dialects as production does. Synthetic stimuli based on one talker from Binghamton, NY (northern US) and one from Birmingham, AL (southern US) were presented to subjects from both regions. The stimuli were 18 vowel continua in CVC context. These 10-point continua had initial tokens containing nonhigh vowels, such as hot /hɑt/, with formant values and durations manipulated to result in final tokens containing diphthongs, such as height /haɪt/. Other continua contained endpoints such as sad /sæd/ and side /saɪd/, or bought /bɔt/ and bout /baʊt/. The production of diphthongs differs between these dialects, varying from the Canadian raising of the North to the monophthongization of the South. The question to be resolved in this study is whether perception of diphthongs differs between the dialects. It was found in a direct boundary estimation task that southern subjects perceived southern tokens as diphthongs earlier in the continua than northern s...
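A boundary estimate of the sort described above is often obtained by fitting a logistic function to the proportion of diphthong responses at each continuum step; the sketch below shows one way to do this, with fabricated response proportions.

```python
# Sketch of estimating a category boundary along a 10-point continuum by
# fitting a logistic function to the proportion of "diphthong" responses.
# The response data here are fabricated for illustration.
import numpy as np
from scipy.optimize import curve_fit

def logistic(step, boundary, slope):
    return 1.0 / (1.0 + np.exp(-slope * (step - boundary)))

steps = np.arange(1, 11)  # continuum steps 1..10
p_diphthong = np.array([.02, .05, .08, .20, .45, .70, .88, .95, .98, .99])

(boundary, slope), _ = curve_fit(logistic, steps, p_diphthong, p0=[5.0, 1.0])
print(f"50% boundary at step {boundary:.2f}, slope {slope:.2f}")
```

A listener group with an earlier boundary (a smaller fitted value) hears diphthongs sooner along the continuum.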
Journal of the Acoustical Society of America | 1998
Matthew J. Makashay; Keith Johnson
Recent research has suggested that vowel category learning and the perceptual magnet effect are the natural consequences of auditory neural map formation [F. H. Guenther and M. N. Gjaja, J. Acoust. Soc. Am. 100, 1111-1121 (1996)]. As these effects are language-specific, the organization of the perceptual map is crucially dependent on the input received during learning. Guenther and Gjaja implemented a working model of a self-organizing neural map to show that, given input from Gaussian distributions around vowel category centers, unsupervised training leads to a warping of the perceptual space toward the category centers. This work replicates Guenther and Gjaja's results and provides a more realistic test of the model by using real vowel formant data instead of idealized distributions. Single- and multiple-male-talker vowel data were used to train auditory neural maps. Single-talker simulations resulted in map organization similar to Guenther and Gjaja's results. However, no clear vowel category clusters e...
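The sketch below is a deliberately simplified one-dimensional self-organizing map trained on (F1, F2) points, in the spirit of (but much cruder than) the Guenther and Gjaja model; the map size, learning rates, and training data are all illustrative.

```python
# Minimal self-organizing map over (F1, F2) vowel formant space, a
# simplified stand-in for the Guenther & Gjaja (1996) auditory map.
# Training data are random draws, not real talker measurements.
import numpy as np

rng = np.random.default_rng(2)
n_units, lr, sigma = 100, 0.1, 10.0
weights = rng.uniform([200, 800], [1000, 3000], size=(n_units, 2))  # (F1, F2) Hz

def train(samples: np.ndarray, epochs: int = 20) -> None:
    global lr, sigma
    for _ in range(epochs):
        for x in samples:
            winner = np.argmin(np.linalg.norm(weights - x, axis=1))
            dist = np.abs(np.arange(n_units) - winner)  # 1-D map topology
            h = np.exp(-dist**2 / (2 * sigma**2))       # neighborhood function
            weights += lr * h[:, None] * (x - weights)  # pull units toward input
        lr *= 0.95       # decay learning rate
        sigma *= 0.95    # shrink neighborhood

samples = rng.uniform([250, 900], [850, 2500], size=(500, 2))
train(samples)
print(weights[:5])  # map units drift toward dense regions of the input
```

With clustered (rather than uniform) input, units concentrate near category centers, which is the warping that underlies the perceptual magnet account.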
Journal of the Acoustical Society of America | 2016
Matthew J. Makashay; Nancy Pearl Solomon; Van Summers
Hearing-aid amplification enhances speech intelligibility for many hearing-impaired (HI) listeners in quiet, but often provides no clear benefit in noise. Requesting that talkers use clear speech is one strategy to overcome these listening challenges. Paradoxically, one feature of clear speech is a shift to higher frequencies, which may move speech energy into a frequency range that is inaudible or has more distortion for certain HI listeners. Conversely, casual conversational speech may shift speech energy into a lower frequency range that is more audible or has less distortion. This study examined the intelligibility of 21 casually- and clearly-spoken American English coda consonants in nonsense syllables for 9 aided normal-hearing and 18 aided HI listeners. As expected, most clear-speech consonants yielded higher recognition scores. However, certain phonological processes common in casual speech, namely affrication and palatalization, generated significantly higher scores than their clear counterparts ...
Journal of the Acoustical Society of America | 2014
Douglas S. Brungart; Matthew J. Makashay; Van Summers; Benjamin Sheffield; Thomas A. Heil
Pure-tone audiometric thresholds are the gold standard for assessing hearing loss, but most clinicians agree that the audiogram must be paired with a speech-in-noise test to make accurate predictions about how listeners will perform in difficult auditory environments. This study evaluated the effectiveness of a six-alternative closed-set speech-in-noise test based on the Modified Rhyme Test (House, 1965). This 104-word test was carefully constructed to present stimuli with and without ITD-based spatial cues at two different levels and two different SNR values. This allows the results to be analyzed not only in terms of overall performance, but also in terms of the impact of audibility, the slope of the psychometric function, and the amount of spatial release from masking for each individual listener. Preliminary results from normal and hearing-impaired listeners show that the increase in overall level from 70 dB to 78 dB that was implemented in half of the trials had little impact on performance. This suggests that the test is relatively effective at isolating speech-in-noise distortion from the effects of reduced audibility at high frequencies. Data collection is currently underway to compare performance in the MRT test to performance in a matrix sentence task in a variety of realistic operational listening environments. [The views expressed in this abstract are those of the authors and do not necessarily reflect the official policy or position of the DoD or the US Government.]
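As an aside on the ITD-based spatial cues mentioned above: an interaural time difference can be imposed on a monaural signal simply by delaying one ear's copy, as in the sketch below. The 500-microsecond ITD and other values are illustrative, not the study's parameters.

```python
# Sketch of imposing an ITD-based spatial cue on a monaural signal by
# delaying one ear's copy by a fixed number of samples. Values illustrative.
import numpy as np

def apply_itd(mono: np.ndarray, fs: float, itd_s: float = 500e-6) -> np.ndarray:
    """Return an (n, 2) stereo array with the right ear delayed by itd_s."""
    delay = int(round(itd_s * fs))
    left = np.concatenate([mono, np.zeros(delay)])
    right = np.concatenate([np.zeros(delay), mono])
    return np.stack([left, right], axis=1)

fs = 44100.0
tone = np.sin(2 * np.pi * 500 * np.arange(int(0.2 * fs)) / fs)  # 200-ms, 500-Hz tone
stereo = apply_itd(tone, fs)
print(stereo.shape)  # (samples, 2): left leads, so the tone lateralizes left
```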
Journal of the Acoustical Society of America | 2012
Matthew J. Makashay; Nancy Pearl Solomon; Van Summers
For many hearing-impaired (HI) listeners, hearing-aid amplification provides near-normal speech recognition in quiet. Nonetheless, many of these same listeners show large speech deficits, relative to normal-hearing (NH) listeners, that are not effectively addressed via amplification in noisy listening conditions. One compensating strategy HI listeners use is to ask talkers to speak clearly. However, as one of the features of clear speech is a shift to higher frequencies, HI listeners may not benefit as much as NH listeners if the new frequencies are outside their audible range. This study examined the intelligibility of conversationally- and clearly-spoken coda consonants in nonsense syllables. These free-variant allophones of 21 American English consonants were produced in three phonological environments: syllable (utterance) final; syllable final followed by schwa; and syllable final followed by palatal approximant and schwa. The stimuli were presented in broadband noise and in quiet to NH and HI listen...
Journal of the Acoustical Society of America | 2010
Van Summers; Joshua G. Bernstein; Matthew J. Makashay; Golbarg Mehraei; Sarah Melamed; Marjorie R. Leek; Frederick J. Gallun; Michelle R. Molis
Suprathreshold distortions in auditory processing contribute to speech recognition deficits for hearing‐impaired (HI) listeners in noise. Outer hair cell damage and attendant reductions in frequency selectivity and peripheral compression may contribute to these deficits. Reduced sensitivity to spectral or temporal modulations or temporal fine structure (TFS) information may also play a role. Eight normal‐hearing and 18 HI listeners were tested in psychoacoustic tasks to assess frequency selectivity (notched‐noise), peripheral compression (temporal masking curves), TFS sensitivity [frequency modulation (FM) detection in the presence of random amplitude modulation], and spectrotemporal modulation (STM) sensitivity (spectral‐temporal “ripple” detection). Performance was examined at 500, 1000, 2000, and 4000 Hz at several presentation levels. Listeners were also tested on sentence recognition in stationary and modulated noise (92‐dB sound pressure level signal level; −6‐, −3‐, 0‐, and +3‐dB SNRs). HI listener...
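The spectrotemporal "ripple" stimuli used in STM detection tasks can be approximated by summing log-spaced tone carriers whose amplitudes follow a sinusoidal envelope drifting across log-frequency and time, as in the sketch below; the ripple density, rate, and depth here are illustrative only.

```python
# Sketch of a spectrotemporal "ripple" stimulus: log-spaced tone carriers
# with a sinusoidal envelope across log-frequency that drifts over time.
# Parameter values are illustrative, not those used in the study.
import numpy as np

fs, dur = 22050, 0.5
t = np.arange(int(fs * dur)) / fs
freqs = np.logspace(np.log10(400), np.log10(8000), num=100)  # log-spaced carriers
density, rate, depth = 2.0, 4.0, 0.9  # cycles/octave, Hz, modulation depth

rng = np.random.default_rng(3)
signal = np.zeros_like(t)
for f in freqs:
    x = np.log2(f / freqs[0])                              # position in octaves
    env = 1.0 + depth * np.sin(2 * np.pi * (rate * t + density * x))
    signal += env * np.sin(2 * np.pi * f * t + rng.uniform(0, 2 * np.pi))
signal /= np.max(np.abs(signal))                           # normalize to avoid clipping
print(signal.shape)
```

Setting depth to 0 yields the unmodulated reference; detection thresholds are typically measured as the smallest depth a listener can distinguish from that reference.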