Outi Tuomainen
University College London
Publications
Featured research published by Outi Tuomainen.
Journal of Speech Language and Hearing Research | 2017
Lorna F. Halliday; Outi Tuomainen; Stuart Rosen
Purpose The goal of this study was to examine language development and factors related to language impairments in children with mild to moderate sensorineural hearing loss (MMHL). Method Ninety children, aged 8-16 years (46 children with MMHL; 44 age-matched controls), were administered a battery of standardized language assessments, including measures of phonological processing, receptive and expressive vocabulary and grammar, word and nonword reading, and parental report of communication skills. Group differences were examined after controlling for nonverbal ability. Results Children with MMHL performed as well as controls on receptive vocabulary and word and nonword reading. They also performed within normal limits, albeit significantly worse than controls, on expressive vocabulary and on receptive and expressive grammar, and worse than both controls and standardized norms on phonological processing and parental report of communication skills. However, there was considerable variation in performance, with 26% showing evidence of clinically significant oral or written language impairments. Poor performance was linked neither to severity of hearing loss nor to age of diagnosis. Rather, outcomes were related to nonverbal ability, maternal education, and the presence or absence of a family history of language problems. Conclusions Clinically significant language impairments are not an inevitable consequence of MMHL. Risk factors appear to include lower maternal education and a family history of language problems, whereas nonverbal ability may constitute a protective factor.
Cognition | 2017
Lorna F. Halliday; Outi Tuomainen; Stuart Rosen
There is a general consensus that many children and adults with dyslexia and/or specific language impairment display deficits in auditory processing. However, how these deficits are related to developmental disorders of language is uncertain, and at least four categories of model have been proposed: single distal cause models, risk factor models, association models, and consequence models. This study used children with mild to moderate sensorineural hearing loss (MMHL) to investigate the link between auditory processing deficits and language disorders. We examined the auditory processing and language skills of 46 8- to 16-year-old children with MMHL and 44 age-matched typically developing controls. Auditory processing abilities were assessed using child-friendly psychophysical techniques in order to obtain discrimination thresholds. Stimuli incorporated three different timescales (µs, ms, s) and three different levels of complexity (simple nonspeech tones, complex nonspeech sounds, speech sounds), and tasks required discrimination of frequency or amplitude cues. Language abilities were assessed using a battery of standardised assessments of phonological processing, reading, vocabulary, and grammar. We found evidence that three different auditory processing abilities showed different relationships with language: deficits in a general auditory processing component were necessary but not sufficient for language difficulties, consistent with a risk factor model; deficits in slow-rate amplitude modulation (envelope) detection were sufficient but not necessary for language difficulties, consistent with either a single distal cause or a consequence model; and deficits in the discrimination of a single speech contrast (/bɑ/ vs. /dɑ/) were neither necessary nor sufficient for language difficulties, consistent with an association model.
Our findings suggest that different auditory processing deficits may constitute distinct and independent routes to the development of language difficulties in children.
International Journal of Language & Communication Disorders | 2017
Peter Howell; Kevin Tang; Outi Tuomainen; Sin Kan Chan; Kirsten Beltran; Avin Mirawdeli; John Harris
BACKGROUND Stuttering and word-finding difficulty (WFD) are two types of communication difficulty that occur frequently in children who learn English as an additional language (EAL), as well as in those who speak only English. The two disorders require different, specific forms of intervention. Prior research has described the symptoms of each type of difficulty. This paper describes the development of a non-word repetition test (UNWR), applicable across languages, that was validated by comparing groups of children identified by their speech and language symptoms as having either stuttering or WFD. AIMS To evaluate whether non-word repetition scores on the UNWR test distinguished between children who stutter and those who have a WFD, irrespective of the children's first language. METHODS & PROCEDURES The UNWR was administered to ninety-six 4-5-year-old children attending UK schools (20.83% of whom had EAL). The children's speech samples in English were assessed for symptoms of stuttering and WFD, and UNWR scores were calculated. OUTCOMES & RESULTS Regression models were fitted to establish whether language group (English only/EAL) and symptoms of (1) stuttering and (2) WFD predicted UNWR scores. Stuttering symptoms predicted UNWR scores, whereas WFD did not; together, these findings suggest that UNWR scores dissociate stuttering from WFD. There were no differences between monolingual English speakers and children who had EAL. CONCLUSIONS & IMPLICATIONS UNWR scores distinguish between stuttering and WFD irrespective of the language(s) spoken, allowing future evaluation of a range of languages in clinics or schools.
Journal of Speech Language and Hearing Research | 2016
Valerie Hazan; Outi Tuomainen; Michèle Pettinato
Purpose This study investigated the acoustic characteristics of spontaneous speech by talkers aged 9-14 years and their ability to adapt these characteristics to maintain effective communication when intelligibility was artificially degraded for their interlocutor. Method Recordings were made for 96 children (50 female participants, 46 male participants) engaged in a problem-solving task with a same-sex friend; recordings for 20 adults were used as reference. The task was carried out in good listening conditions (normal transmission) and in degraded transmission conditions. Articulation rate, median fundamental frequency (f0), f0 range, and relative energy in the 1- to 3-kHz range were analyzed. Results With increasing age, children significantly reduced their median f0 and f0 range, became faster talkers, and reduced their mid-frequency energy in spontaneous speech. Children produced similar clear speech adaptations (in degraded transmission conditions) as adults, but only children aged 11-14 years increased their f0 range, an unhelpful strategy not transmitted via the vocoder. Changes made by children were consistent with a general increase in vocal effort. Conclusion Further developments in speech production take place during later childhood. Children use clear speech strategies to benefit an interlocutor facing intelligibility problems but may not be able to attune these strategies to the same degree as adults.
Journal of the Acoustical Society of America | 2016
Outi Tuomainen; Valerie Hazan; Rachel Romeo
This study investigated whether adaptations made in clear speaking styles result in more discriminable phonetic categories than in a casual style. Multiple iterations of keywords with word-initial /s/-/ʃ/ were obtained from 40 adults in casual and clear speech via picture description. For centroids, cross-category distance increased in clear speech but with no change in within-category dispersion and no effect on discriminability. However, talkers produced fewer tokens with centroids in the ambiguous region for the /s/-/ʃ/ distinction. These results suggest that, whereas interlocutor feedback regarding communicative success may promote greater segmental adaptations, it is not necessary for some adaptation to occur.
Journal of the Acoustical Society of America | 2018
Valerie Hazan; Outi Tuomainen; Jeesun Kim; Chris Davis; Benjamin Sheffield; Douglas S. Brungart
This study investigated the speech adaptations made by older adults (OA) with and without age-related hearing loss to communicate effectively in challenging communicative conditions. Acoustic analyses were carried out on spontaneous speech produced during a problem-solving task (diapix) carried out by talker pairs in different listening conditions. There were 83 talkers of Southern British English. Fifty-seven were OAs aged 65-84: 30 older adults with normal hearing (OANH) and 27 older adults with hearing loss (OAHL; mean pure-tone average (PTA) 0.25-4 kHz: 27.7 dB HL). Twenty-six talkers were younger adults (YA) aged 18-26 with normal hearing. Participants were recorded while completing the diapix task with a conversational partner (a YA of the same sex) when (a) both talkers heard normally (NORM), (b) the partner had a simulated hearing loss, and (c) both talkers heard babble noise. Irrespective of hearing status, there were age-related differences in some acoustic characteristics of YA and OA speech produced in NORM, most likely linked to physiological factors. In challenging conditions, while OANH talkers typically patterned with YA talkers, OAHL talkers made adaptations more consistent with an increase in vocal effort. The study suggests that even mild presbycusis in healthy OAs can affect the speech adaptations made to maintain effective communication.
Hearing Research | 2018
Valerie Hazan; Outi Tuomainen; Lilian Tu; Jeesun Kim; Chris Davis; Douglas S. Brungart; Benjamin Sheffield
This study investigated the relation between the intelligibility of conversational and clear speech produced by older and younger adults and (a) the acoustic profile of their speech and (b) their communication effectiveness. Speech samples from 30 talkers from the elderLUCID corpus were used: 10 young adults (YA), 10 older adults with normal hearing (OANH), and 10 older adults with presbycusis (OAHL). Samples were extracted from recordings made while participants completed a problem-solving cooperative task (diapix) with a conversational partner who could either hear them easily (NORM) or only via a simulated hearing loss (HLS), which led talkers to naturally adopt a clear speaking style. In speech-in-noise listening experiments involving 21 young adult listeners, speech samples from OANH and OAHL talkers were perceived as less intelligible than those of YA talkers. HLS samples were more intelligible than NORM samples, with greater improvements in intelligibility across conditions seen for OA speech. The presence of presbycusis affected (a) the clear speech strategies adopted by OAHL talkers and (b) task effectiveness: OAHL talkers showed some adaptations consistent with an increase in vocal effort, and it took them significantly longer than the YA group to complete the diapix task. The relative energy in the 1-3 kHz region of the long-term average spectrum was the feature that best predicted (a) the intelligibility of speech samples and (b) task transaction time in the HLS condition. Overall, our study suggests that spontaneous speech produced by older adults is less intelligible in babble noise, probably because less energy is present in the 1-3 kHz frequency range, which is rich in acoustic cues. Even mild presbycusis in 'healthy aged' adults can affect the dynamic adaptations in speech that are beneficial for effective communication.
Highlights
- Spontaneous speech by older adults is less intelligible than young adult speech.
- Presbycusis affected clear speech strategies and task effectiveness.
- The ME1-3 kHz measure best predicted speech intelligibility and task effectiveness.
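The ME1-3 kHz measure used in these studies (relative energy in the 1-3 kHz band of the long-term average spectrum) can be sketched in a few lines of Python. This is a minimal illustration, not the elderLUCID analysis pipeline: the function name, the FFT-based spectrum estimate, and the exact band edges are assumptions for the example.

```python
import numpy as np

def me_1_3_khz(signal, fs, lo=1000.0, hi=3000.0):
    """Energy in the lo-hi band relative to total energy (in dB),
    computed from a simple FFT power spectrum of the whole signal.
    Illustrative sketch only, not the published analysis."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2          # power spectrum
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)     # bin frequencies (Hz)
    band = (freqs >= lo) & (freqs <= hi)                 # 1-3 kHz bins
    return 10.0 * np.log10(spectrum[band].sum() / spectrum.sum())
```

A talker whose speech carries relatively more mid-frequency energy yields a higher (less negative) value, the pattern the study links to better intelligibility in noise.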
Journal of the Acoustical Society of America | 2017
Valerie Hazan; Outi Tuomainen
This study investigates whether a "clear speech benefit" is obtained for speech produced by older adult (OA) talkers and younger adult (YA) controls in a clear speaking style when heard in babble noise. The speech materials were recorded while OA and YA talkers read BKB sentences to a YA partner who repeated each sentence while hearing normally (NORM) or with a simulated hearing loss (HLS). The HLS condition naturally induced clear speech adaptations. 128 BKB sentences from 4 YA and 4 OA talkers (NORM, HLS), matched on a range of metrics, were used in an adaptive listening test tracking the signal-to-noise ratio corresponding to 67% intelligibility. Listeners were 71 native speakers of British English: 24 YA (M = 25.2 yrs), 27 OA-NH with normal hearing (M = 71.8), and 20 OA-HL with presbycusis (M = 73.7). Speech perception in noise was hardest for OA listeners, especially OA-HL. SNR thresholds were significantly lower for YA than for OA voices. The clear speech benefit for HLS speech was only significant for YA voic...
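Adaptive tracking of an SNR threshold, as used in this listening test, is usually implemented as a staircase: the SNR is lowered after correct responses and raised after errors, and the threshold is estimated from the reversal points. The abstract does not give the exact rule used, so the sketch below uses a generic 2-down-1-up staircase (which converges near the 70.7%-correct point rather than 67%); the function name and parameters are assumptions.

```python
import random

def track_srt(p_correct, start_snr=10.0, step=2.0, n_reversals=8, seed=0):
    """Toy 2-down-1-up staircase: lower the SNR (harder) after two
    consecutive correct responses, raise it after any error. The
    listener is simulated by p_correct(snr), a psychometric function
    returning the probability of a correct response at that SNR.
    Returns the mean SNR over the later reversals as the threshold."""
    rng = random.Random(seed)
    snr, streak, prev_dir = start_snr, 0, 0
    reversals = []
    while len(reversals) < n_reversals:
        if rng.random() < p_correct(snr):   # simulated trial response
            streak += 1
            if streak == 2:
                streak, direction = 0, -1   # two correct -> step down
            else:
                direction = 0               # wait for a second correct
        else:
            streak, direction = 0, +1       # error -> step up
        if direction:
            if prev_dir and direction != prev_dir:
                reversals.append(snr)       # record the turnaround
            prev_dir = direction
            snr += direction * step
    tail = reversals[2:]                    # discard early reversals
    return sum(tail) / len(tail)
```

For example, simulating a listener with a logistic psychometric function centred near -4 dB SNR, `track_srt` converges to a threshold a few dB either side of that point.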
Journal of the Acoustical Society of America | 2016
Outi Tuomainen; Valerie Hazan
Our study investigates the strategies used by older and younger adults to clarify their speech and compensate for masking effects when communicating in challenging listening conditions. A total of 50 older adults (OA, 65-85 years, 30 F) and 23 younger adults (YA, 18-35 years, 14 F) were recorded (in the "Talker A" role) while they completed the problem-solving diapix task with a young adult ("Talker B") in four listening conditions: with no interference (NORM), when Talker B had a simulated hearing loss (HLS), when Talker B heard babble noise (BAB1), or when both talkers heard babble noise (BAB2). For Talker A, we measured articulation rate, fundamental frequency (f0) median and range, and the energy in the 1-3 kHz band reflecting spectral tilt (ME1-3 kHz). In NORM, OAs were slower speakers and had lower ME1-3 kHz and a wider f0 range than YAs. Median f0 also converged for men and women in OA talkers. In adverse conditions, YAs slowed down their speech (HLS) and increased their f0 range (BAB1 and BAB2) more than OAs, and OAs raised their median f0 more than YAs (...
Journal of the Acoustical Society of America | 2015
Outi Tuomainen; Valerie Hazan
When asked to speak clearly, talkers make adaptations to various acoustic characteristics of their speech. Do these adaptations specifically enhance phonetic contrasts or just result in more global enhancements? For phonetic contrasts, increased discriminability could be achieved by increasing between-category distance, reducing within-category dispersion, or both. The LUCID corpus contains 32 iterations per consonant for each of 40 adults for the /s/-/ʃ/ and /p/-/b/ contrasts. Iterations were obtained via picture elicitation in a sentence context in two conditions: when asked to speak casually and clearly. Friction centroids were measured for /s/-/ʃ/ and voice onset times for /p/-/b/. For /s/-/ʃ/, although there was significantly greater distance between centroids in the clear speech condition, within-category dispersion did not differ across speaking styles and there was no significant increase in overall discriminability in the clear condition. For /p/-/b/, in the clear condition, there was a significan...
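The two quantities this analysis turns on, the friction centroid and a discriminability index combining between-category distance with within-category dispersion, can be sketched as follows. This is an illustrative implementation, not the study's measurement procedure: the function names, FFT-based centroid, and the d'-style pooled-SD index are assumptions for the example.

```python
import numpy as np

def friction_centroid(frame, fs):
    """Spectral centroid (Hz) of a fricative frame: the
    power-weighted mean frequency of its spectrum."""
    spec = np.abs(np.fft.rfft(frame)) ** 2
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / fs)
    return float((freqs * spec).sum() / spec.sum())

def discriminability(cat_a, cat_b):
    """d'-style index: between-category centroid distance divided
    by the pooled within-category standard deviation. Grows only if
    the categories move apart faster than they spread out."""
    a, b = np.asarray(cat_a, float), np.asarray(cat_b, float)
    pooled_sd = np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2.0)
    return abs(a.mean() - b.mean()) / pooled_sd
```

Under this index, the pattern reported for /s/-/ʃ/ follows directly: a larger between-category distance raises discriminability only if within-category dispersion stays the same or shrinks in proportion.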