Shae D. Morgan
University of Utah
Publications
Featured research published by Shae D. Morgan.
Journal of the Acoustical Society of America | 2014
Shae D. Morgan; Sarah Hargus Ferguson
In the laboratory, talkers asked to speak as though talking to an individual with hearing loss modify their speech from their everyday conversational style to a “clear” speaking style. In the real world, individuals with hearing loss sometimes complain that their frequent communication partners seem to be shouting at them, while the communication partners insist that they are just trying to speak more clearly. Acoustic analyses have contrasted angry speech with neutral speech and clear speech with conversational speech. A comparison of these analyses reveals that angry speech and clear speech share several acoustic modifications. For example, both clear speech and angry speech show increased energy at high frequencies. The present study will explore whether clear speech sounds angry to listeners. Young adult listeners with normal hearing will be presented with conversational and clear sentences from the Ferguson Clear Speech Database (Ferguson, 2004) and asked to assign an emotion category to each sentenc...
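The abstract does not include analysis code; the following is a minimal Python sketch of the kind of acoustic comparison it refers to (relative energy at high frequencies). The file names conversational.wav and clear.wav and the 3 kHz cutoff are illustrative assumptions, not materials from the study.

```python
# A minimal sketch, not the study's analysis code: compare the proportion of
# spectral power above an assumed cutoff in two recordings.
import numpy as np
from scipy.io import wavfile

def high_frequency_energy_ratio(path, cutoff_hz=3000.0):
    """Fraction of total spectral power above cutoff_hz for one recording."""
    fs, x = wavfile.read(path)
    x = x.astype(np.float64)
    if x.ndim > 1:                       # fold stereo to mono
        x = x.mean(axis=1)
    power = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    return power[freqs >= cutoff_hz].sum() / power.sum()

if __name__ == "__main__":
    for label, path in [("conversational", "conversational.wav"),
                        ("clear", "clear.wav")]:
        print(label, round(high_frequency_energy_ratio(path), 4))
```

A higher ratio for the clear recording would be consistent with the increased high-frequency energy the abstract describes for both clear and angry speech.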
Transportation Research Record | 2018
Douglas Getty; Francesco Biondi; Shae D. Morgan; Joel M. Cooper; David L. Strayer
In-vehicle voice control systems are standard in most new vehicles. However, despite auditory-vocal interaction allowing drivers to keep their hands on the steering wheel and eyes on the forward roadway, recent findings indicate the potential for these systems to increase levels of workload and lead to lengthy interaction times. Although many studies have examined the distraction potential of interacting with in-vehicle voice control systems, more research is needed to understand the relationship between different system design components and workload. In this study, we investigate the role of system delay, system accuracy, and menu depth in determining the overall level of demand and interaction times on eight different 2017 model-year vehicles. Voice system accuracy was measured via playback of a pre-recorded sample of voice commands through a studio monitor mounted near the headrest. Menu depth and system delay were calculated by measuring, respectively, the number of interaction steps and total system processing time required to access common infotainment functions. These measures were validated through linear and multiple regression analyses with workload and task time collected in an on-road study. We found system delay and system accuracy to be significant predictors of task time and subjective measures of workload from the NASA Task Load Index and the Driving Activity Load Index. In addition to providing valuable information on the role of separate voice control system design components on resulting levels of workload, these results extend past research by generalizing findings to multiple current auditory-vocal systems.
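The regression analyses are described only at a high level. The sketch below shows one way such a validation could look, fitting an ordinary least squares model of task time on system delay, accuracy, and menu depth; the predictor values and task times are invented placeholders, not data from the eight vehicles tested.

```python
# Illustrative only: a multiple regression relating per-vehicle system delay,
# accuracy, and menu depth to mean task time. The numbers are placeholders.
import numpy as np
import statsmodels.api as sm

# One row per vehicle: [system delay (s), accuracy (proportion), menu depth (steps)]
X = np.array([[1.2, 0.95, 3],
              [2.0, 0.88, 4],
              [1.5, 0.92, 3],
              [2.8, 0.80, 5],
              [1.1, 0.97, 2],
              [2.3, 0.85, 4],
              [1.8, 0.90, 3],
              [2.6, 0.82, 5]])
task_time = np.array([28.0, 41.0, 33.0, 55.0, 25.0, 46.0, 37.0, 51.0])  # seconds

model = sm.OLS(task_time, sm.add_constant(X)).fit()
print(model.summary())   # coefficients for delay, accuracy, and menu depth
```

The same structure would apply to subjective workload scores (e.g., NASA Task Load Index ratings) substituted for task time as the outcome variable.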
Journal of the Acoustical Society of America | 2018
Shae D. Morgan
Recent research suggests that some types of emotional speech are more intelligible than others when presented in background noise (Dupuis & Pichora-Fuller, 2015). Yet emotional speech can itself be present as the background noise. Attentional mechanisms and auditory stream segregation capabilities likely impact target word recognition in these situations with emotional speech maskers. The present study examined recognition of calm target sentences presented in masker backgrounds that combined four emotions (calm, sad, happy, and angry) with two masker types (2-talker babble and speech-shaped noise). The emotion categories differ from one another on perceptual dimensions of activation (low to high) and pleasantness (unpleasant to pleasant). Speech-shaped noise maskers were spectrally and temporally similar to the 2-talker babble maskers. Performance was compared between the two masker type conditions to quantify the amount of informational masking induced by different emotional speech maskers. Furthermore, the number of ta...
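One common way a noise masker can be made spectrally similar to a 2-talker babble masker is to pair the babble's long-term magnitude spectrum with random phase. The sketch below illustrates that idea under the assumption of a mono babble.wav file; it is not the study's stimulus-generation code, and matching the temporal envelope would be an additional step not shown here.

```python
# Sketch: generate speech-shaped noise with the same long-term magnitude
# spectrum as a babble recording (assumed file name 'babble.wav').
import numpy as np
from scipy.io import wavfile

fs, babble = wavfile.read("babble.wav")
babble = babble.astype(np.float64)
if babble.ndim > 1:
    babble = babble.mean(axis=1)

magnitude = np.abs(np.fft.rfft(babble))              # long-term magnitude spectrum
random_phase = np.random.uniform(0.0, 2.0 * np.pi, size=magnitude.shape)
noise = np.fft.irfft(magnitude * np.exp(1j * random_phase), n=len(babble))

# Scale to a safe peak and write as 16-bit PCM; level matching to the babble
# would be handled separately when mixing with target sentences.
scaled = (noise / np.max(np.abs(noise)) * 0.9 * 32767).astype(np.int16)
wavfile.write("speech_shaped_noise.wav", fs, scaled)
```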
Journal of the Acoustical Society of America | 2017
Shae D. Morgan; Rebecca Labowe
Two models of emotion identification and discrimination are not often compared and may show differing trends in how listeners rate emotional stimuli. The discrete or “basic” emotion model posits that emotions are categorical, and that listeners develop auditory prototypes of “basic” emotions (e.g., anger) based on their acoustic profile and past experiences. More complex emotions (e.g., frustration) fall under a “basic” category. The dimensional model examines emotions along continua of different emotional dimensions, such as activation/arousal and pleasantness/valence. The present study introduces the Morgan Emotional Speech Set and examines listener judgments of the stimuli in the corpus. The database consists of 2160 emotional speech sentences (90 sentences x 4 emotions x 6 talkers) produced by three male and three female actors for use in future studies. Each sentence was rated by 10 listeners (5 male, 5 female), who assigned an emotion category to each sentence and also rated each sentence by its act...
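The two models imply different summaries of the same listener judgments. As a small illustration (with invented ratings rather than data from the Morgan Emotional Speech Set, and an assumed 1-to-7 rating scale), the sketch below tallies a modal emotion category for one sentence alongside mean activation and pleasantness ratings.

```python
# Illustrative aggregation of listener judgments for a single sentence:
# modal category (discrete model) vs. mean dimensional ratings.
from collections import Counter
from statistics import mean

# One entry per listener: (category label, activation 1-7, pleasantness 1-7).
judgments = [("angry", 6, 2), ("angry", 7, 1), ("angry", 6, 2),
             ("sad", 3, 2), ("happy", 6, 6)]

categories = Counter(label for label, _, _ in judgments)
modal_category, votes = categories.most_common(1)[0]
print("Discrete model:", modal_category, f"({votes}/{len(judgments)} listeners)")
print("Dimensional model: activation =", mean(a for _, a, _ in judgments),
      "pleasantness =", mean(p for _, _, p in judgments))
```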
Journal of the Acoustical Society of America | 2017
Shae D. Morgan; Ashton Crain
Previous research suggests that young normal-hearing and older hearing-impaired adult listeners judge clear speech as sounding angry more often than conversational speech. Interestingly, older hearing-impaired listeners were less likely than young normal-hearing listeners to judge sentences as angry in both speaking styles. It was unknown, however, whether this difference in ratings of emotion was driven by the age or hearing status differences between the two groups. A secondary investigation showed that young adult listeners with a simulated hearing loss that matched the older hearing-impaired group rated emotions nearly identically to the young normal-hearing group, suggesting no effect of hearing loss on ratings of emotion. The simulated hearing loss, however, did not capture other auditory factors or psychological processes associated with aging that may have accounted for the group differences. The present study carried out the same emotional rating task using clear and conversational speech sentences (a...
Journal of the Acoustical Society of America | 2017
Shae D. Morgan; Sarah Hargus Ferguson; Eric J. Hunter
In adverse listening environments or when barriers to communication are present (such as hearing loss), talkers often modify their speech to facilitate communication. Such environments and demands for effective communication are often present in professions that require extensive use of the voice (e.g., teachers, call center workers, etc.). Women in these professions are known to suffer a higher incidence of voice disorders, possibly due to the accommodation strategies they employ in adverse environments. The present study assessed gender differences in speech acoustic changes made in simulated environments (quiet, low-level noise, high-level noise, and reverberation) for two different speaking style instructions (clear and conversational). Ten talkers (five male, five female) performed three speech production tasks (a passage, a list of sentences, and a picture description) in each simulated environment. The two speaking styles were recorded in separate test sessions. Several acoustic ...
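Because the abstract's list of acoustic measures is truncated above, the sketch below uses overall RMS level as one illustrative measure, computed per recording and grouped by simulated environment and speaking style; the file naming scheme is an assumption.

```python
# A minimal sketch: one illustrative acoustic measure (RMS level) per
# recording, organized by environment and style. File names are assumptions.
import numpy as np
from scipy.io import wavfile

def rms_db(path):
    """Overall RMS level in dB (arbitrary reference) for one recording."""
    fs, x = wavfile.read(path)
    x = x.astype(np.float64)
    if x.ndim > 1:
        x = x.mean(axis=1)
    return 20 * np.log10(np.sqrt(np.mean(x ** 2)) + 1e-12)

environments = ["quiet", "low_noise", "high_noise", "reverberation"]
for style in ["conversational", "clear"]:
    for env in environments:
        print(style, env, round(rms_db(f"talker01_{env}_{style}.wav"), 1))
```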
Journal of Speech Language and Hearing Research | 2017
Sarah Hargus Ferguson; Shae D. Morgan
Purpose The purpose of this study is to examine talker differences for subjectively rated speech clarity in clear versus conversational speech, to determine whether ratings differ for young adults with normal hearing (YNH listeners) and older adults with hearing impairment (OHI listeners), and to explore effects of certain talker characteristics (e.g., gender) on perceived clarity. Relationships among clarity ratings and other speech perceptual and acoustic measures were also explored. Method Twenty-one YNH and 15 OHI listeners rated clear and conversational sentences produced by 41 talkers on a scale of 1 (lowest possible clarity) to 7 (highest possible clarity). Results While clarity ratings varied significantly among talkers, listeners rated clear speech significantly clearer than conversational speech for all but 1 talker. OHI and YNH listeners gave similar ratings for conversational speech, but ratings for clear speech were significantly higher for OHI listeners. Talker gender effects differed for YNH and OHI listeners. Ratings of clear speech varied among subgroups of talkers with different amounts of experience talking to people with hearing loss. Conclusions Perceived clarity varies widely among talkers, but nearly all produce clear speech that sounds significantly clearer than their conversational speech. Few differences were seen between OHI and YNH listeners except the effect of talker gender.
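The core per-talker comparison can be illustrated with a short sketch: mean clarity ratings on the 1-to-7 scale for each talker's clear and conversational sentences, compared with a paired t-test. The ratings below are invented placeholders for a handful of talkers, not the study's data or its statistical analysis.

```python
# Sketch of a clear vs. conversational clarity comparison across talkers.
import numpy as np
from scipy import stats

# Mean clarity rating per talker for each speaking style (placeholder values).
conversational = np.array([3.1, 2.8, 3.5, 4.0, 2.9, 3.3])
clear          = np.array([4.6, 4.9, 4.2, 5.5, 3.0, 4.8])

t, p = stats.ttest_rel(clear, conversational)
print("Mean difference (clear - conversational):",
      round(float(np.mean(clear - conversational)), 2))
print("Paired t-test: t =", round(float(t), 2), "p =", round(float(p), 4))
```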
Journal of Speech Language and Hearing Research | 2017
Shae D. Morgan; Sarah Hargus Ferguson
Purpose In this study, we investigated the emotion perceived by young listeners with normal hearing (YNH listeners) and older adults with hearing impairment (OHI listeners) when listening to speech produced conversationally or in a clear speaking style. Method The first experiment included 18 YNH listeners, and the second included 10 additional YNH listeners along with 20 OHI listeners. Participants heard sentences spoken conversationally and clearly. Participants selected the emotion they heard in the talker's voice using a 6-alternative, forced-choice paradigm. Results Clear speech was judged as sounding angry and disgusted more often and happy, fearful, sad, and neutral less often than conversational speech. Talkers whose clear speech was judged to be particularly clear were also judged as sounding angry more often and fearful less often than other talkers. OHI listeners reported hearing anger less often than YNH listeners; however, they still judged clear speech as angry more often than conversational speech. Conclusions Speech spoken clearly may sound angry more often than speech spoken conversationally. Although perceived emotion varied between YNH and OHI listeners, judgments of anger were higher for clear speech than conversational speech for both listener groups. Supplemental Materials https://doi.org/10.23641/asha.5170717.
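As a small illustration of how 6-alternative forced-choice responses can be tallied by speaking style (the responses below are invented, not the study's data), the sketch computes the proportion of sentences judged angry in each style.

```python
# Illustrative tally of forced-choice emotion responses by speaking style.
from collections import Counter

responses = [("clear", "angry"), ("clear", "disgusted"), ("clear", "angry"),
             ("clear", "neutral"), ("conversational", "neutral"),
             ("conversational", "happy"), ("conversational", "sad"),
             ("conversational", "angry")]

for style in ("conversational", "clear"):
    counts = Counter(emotion for s, emotion in responses if s == style)
    total = sum(counts.values())
    angry_rate = counts.get("angry", 0) / total
    print(style, dict(counts), f"proportion judged angry = {angry_rate:.2f}")
```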
Journal of the Acoustical Society of America | 2016
Shae D. Morgan; Skyler G. Jennings; Sarah Hargus Ferguson
Previous research suggests that both young normal-hearing and older hearing-impaired listeners judge clear speech as sounding angry more often than conversational speech. Interestingly, older hearing-impaired listeners were less likely than young normal-hearing listeners to judge sentences as angry in both speaking styles, suggesting that age and/or hearing loss may play a role in judging talkers’ emotions. An acoustic cue that helps distinguish angry speech from emotionally neutral speech is increased high-frequency energy, which may be attenuated or rendered inaudible by age-related hearing loss. The present study tests the hypothesis that simulating such a hearing loss will decrease the perception of anger by young normal-hearing listeners. Sentences spoken clearly and conversationally were processed and filtered to simulate the average hearing loss of the older hearing-impaired listeners from a previous study. Young normal-hearing listeners were asked to assign each sentence to one of six categories (...
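A hearing-loss simulation of the general kind described can be sketched as an audiogram-shaped attenuation filter. The threshold values below are illustrative rather than the average audiogram from the earlier study, and firwin2-based FIR filtering is only one of several ways such processing could be implemented.

```python
# Sketch: attenuate a sentence according to an assumed audiogram shape.
import numpy as np
from scipy import signal
from scipy.io import wavfile

fs, x = wavfile.read("sentence.wav")      # assumes a recording with fs >= 16 kHz
x = x.astype(np.float64)
if x.ndim > 1:
    x = x.mean(axis=1)

# Audiometric frequencies (Hz) and assumed threshold shifts (dB) to simulate.
freq    = [0.0, 250.0, 500.0, 1000.0, 2000.0, 4000.0, 6000.0, fs / 2.0]
loss_db = [10.0, 10.0, 15.0,  25.0,   45.0,   60.0,   70.0,   70.0]
gain = [10.0 ** (-d / 20.0) for d in loss_db]

# Design a linear-phase FIR filter with that attenuation shape and apply it.
taps = signal.firwin2(1025, freq, gain, fs=fs)
y = signal.lfilter(taps, 1.0, x)

scaled = (y / np.max(np.abs(y)) * 0.9 * 32767).astype(np.int16)
wavfile.write("sentence_simulated_loss.wav", fs, scaled)
```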
Journal of the Acoustical Society of America | 2015
Sarah Hargus Ferguson; Shae D. Morgan
Young adults with normal hearing and older adults with hearing loss performed subjective ratings of speech clarity on sentences spoken by all 41 talkers of the Ferguson Clear Speech Database. The sentences were selected from the CID Everyday Sentence lists and were produced under instructions to speak in a conversational manner and in a clear speaking style. A different set of 14 sentences was recorded in each style. Rated clarity will be compared between the two listener groups as well as among subgroups of talkers who differ in demographic and other characteristics. Clarity data will also be analyzed in conjunction with perceptual and acoustic data obtained in other investigations to reveal the relationship between vowel intelligibility and sentence clarity as well as the acoustic features that underlie perceived sentence clarity for different listener groups.