Publication


Featured research published by Susan Nittrouer.


Trends in Amplification | 2009

The effects of bilateral electric and bimodal electric-acoustic stimulation on language development.

Susan Nittrouer; Christopher Chapman

There is no doubt that cochlear implants have improved the spoken language abilities of children with hearing loss, but delays persist. Consequently, it is imperative that new treatment options be explored. This study evaluated one aspect of treatment that might be modified, namely the choice between bilateral implants and bimodal stimulation. A total of 58 children with at least one implant were tested at 42 months of age on four language measures spanning a continuum from basic to generative in nature. When children were grouped by the kind of stimulation they had at 42 months (one implant, bilateral implants, or bimodal stimulation), no differences across groups were observed. This was true even when the groups were constrained to children who had had at least 12 months to acclimatize to their stimulation configuration. However, when children were grouped according to whether or not they had spent any time with bimodal stimulation (either consistently since their first implant or as an interlude before receiving a second), advantages were found for children who had some bimodal experience, but those advantages were restricted to language abilities that are generative in nature. Thus, previously reported benefits of simultaneous bilateral implantation early in a child's life may not extend to generative language. In fact, children may benefit from a period of bimodal stimulation early in childhood because low-frequency speech signals provide prosody and serve as an aid in learning how to perceptually organize the signal that is received through a cochlear implant.


Journal of the Acoustical Society of America | 2008

Patterns of acquisition of native voice onset time in English-learning children

Joanna H. Lowenstein; Susan Nittrouer

Learning to speak involves both mastering the requisite articulatory gestures of one's native language and learning to coordinate those gestures according to the rules of the language. Voice onset time (VOT) acquisition illustrates this point: the child must learn to produce the necessary upper vocal tract and laryngeal gestures and to coordinate them with very precise timing. This longitudinal study examined the acquisition of English VOT by audiotaping seven children at 2-month intervals from first words (around 15 months) to the appearance of three-word sentences (around 30 months) in spontaneous speech. Words with initial stops were excerpted, and (1) the numbers of words produced with intended voiced and voiceless initial stops were counted; (2) VOT was measured; and (3) within-child standard deviations of VOT were computed. Results showed that children (1) initially avoided saying words with voiceless initial stops, (2) initially did not delay the onset of laryngeal adduction relative to the release of closure as long as adults do for voiceless stops, and (3) were more variable in VOT for voiceless than for voiced stops. Overall, these results support a model of acquisition that focuses on the mastery of gestural coordination as opposed to the acquisition of segmental contrasts.
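Once the closure release and the voicing onset have been hand-marked for each token, the three measures above reduce to counting and simple arithmetic. A minimal scoring sketch in Python, with invented landmark times (real values would come from acoustic analysis of the recordings):

import statistics

# Hypothetical hand-marked landmarks for one child's tokens, in seconds.
# Each token: (closure-release time, voicing-onset time, intended category).
tokens = [
    (0.112, 0.121, "voiced"),     # short-lag stop, e.g. /b/
    (0.087, 0.095, "voiced"),
    (0.143, 0.189, "voiceless"),  # long-lag stop, e.g. /p/
    (0.055, 0.131, "voiceless"),
]

# VOT = voicing onset minus closure release (positive lag, in ms).
vots = {"voiced": [], "voiceless": []}
for release, onset, category in tokens:
    vots[category].append((onset - release) * 1000.0)

for category, values in vots.items():
    print(f"{category}: n={len(values)}, "
          f"mean VOT = {statistics.mean(values):.1f} ms, "
          f"within-child SD = {statistics.stdev(values):.1f} ms")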


Journal of Experimental Psychology: Human Perception and Performance | 2009

Children Discover the Spectral Skeletons in Their Native Language before the Amplitude Envelopes.

Susan Nittrouer; Joanna H. Lowenstein; Robert R. Packer

Much of speech perception research has focused on brief spectro-temporal properties in the signal, but some studies have shown that adults can recover linguistic form even when those properties are absent. In this experiment, 7-year-old English-speaking children demonstrated adultlike abilities to understand speech when only sine waves (SWs) replicating the three lowest resonances of the vocal tract were presented, but they failed to demonstrate comparable abilities when noise bands amplitude-modulated with envelopes derived from the same signals were presented. In contrast, adults who were not native English speakers but who were competent second-language learners were worse at understanding both kinds of stimuli than native English-speaking adults. Results showed that children learn to extract linguistic form from signals that preserve some spectral structure, even if degraded, before they learn to do so for signals that preserve only amplitude structure. The authors hypothesize that children's early sensitivity to global spectral structure reflects the role that it may play in language learning.
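Sine-wave replicas of the kind used here replace the speech signal with a few sinusoids that track the lowest formants. A minimal synthesis sketch, assuming the formant frequency and amplitude tracks have already been estimated (e.g., by LPC analysis); the tracks below are invented for illustration:

import numpy as np

fs = 16000                       # sample rate (Hz)
n = fs                           # one second of signal

# Invented formant tracks (Hz) and amplitudes, one value per sample.
freq_tracks = [
    np.linspace(500, 700, n),    # F1
    np.linspace(1500, 1200, n),  # F2
    np.linspace(2500, 2400, n),  # F3
]
amp_tracks = [np.full(n, 1.0), np.full(n, 0.5), np.full(n, 0.25)]

# Each replica is a sinusoid whose instantaneous frequency follows its
# formant track: the phase is the running integral of the frequency.
sw = np.zeros(n)
for freq, amp in zip(freq_tracks, amp_tracks):
    phase = 2.0 * np.pi * np.cumsum(freq) / fs
    sw += amp * np.sin(phase)

sw /= np.max(np.abs(sw))         # normalize to avoid clipping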


Journal of the Acoustical Society of America | 2010

Learning to perceptually organize speech signals in native fashion.

Susan Nittrouer; Joanna H. Lowenstein

The ability to recognize speech involves sensory, perceptual, and cognitive processes. For much of the history of speech perception research, investigators have focused on the first and third of these, asking how much and what kinds of sensory information are used by normal and impaired listeners, as well as how the effective amount of that information is altered by top-down cognitive processes. This experiment focused on perceptual processes, asking what accounts for how the sensory information in the speech signal gets organized. Two types of speech signals processed to remove properties that could be considered traditional acoustic cues (amplitude envelopes and sine wave replicas) were presented to 100 listeners in five groups: native English-speaking (L1) adults; 7-, 5-, and 3-year-olds; and native Mandarin-speaking adults who were excellent second-language (L2) users of English. The L2 adults performed more poorly than L1 adults with both kinds of signals. Children performed more poorly than L1 adults but showed disproportionately better performance with the sine waves than with the amplitude envelopes compared to both groups of adults. Sentence context had similar effects across groups, so variability in recognition was attributed to differences in perceptual organization of the sensory information, presumed to arise from native language experience.
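The amplitude-envelope signals are produced by noise vocoding: the speech is split into frequency bands, each band's envelope is extracted, and the envelope modulates noise in the same band. A minimal sketch assuming a generic four-band design (the study's exact filter-bank parameters are not reproduced here):

import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def noise_vocode(speech, fs, edges=(100, 500, 1500, 3000, 6000)):
    """Keep only each band's amplitude envelope, carried by noise."""
    rng = np.random.default_rng(0)
    out = np.zeros_like(speech)
    for lo, hi in zip(edges[:-1], edges[1:]):
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        band = sosfiltfilt(sos, speech)
        envelope = np.abs(hilbert(band))      # Hilbert envelope of the band
        carrier = sosfiltfilt(sos, rng.standard_normal(len(speech)))
        carrier /= np.max(np.abs(carrier))
        out += envelope * carrier             # envelope modulates the noise
    return out / np.max(np.abs(out))

# Example: degrade one second of a synthetic amplitude-modulated tone.
fs = 16000
t = np.arange(fs) / fs
test = np.sin(2 * np.pi * 220 * t) * (0.5 + 0.5 * np.sin(2 * np.pi * 3 * t))
degraded = noise_vocode(test, fs)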


International Journal of Pediatric Otorhinolaryngology | 2013

Working memory in children with cochlear implants: problems are in storage, not processing.

Susan Nittrouer; Amanda Caldwell-Tarr; Joanna H. Lowenstein

BACKGROUND: There is growing consensus that hearing loss and consequent amplification likely interact with cognitive systems. A phenomenon often examined with regard to these potential interactions is working memory, modeled as consisting of one component responsible for storage of information and another component responsible for processing of that information. Signal degradation associated with cochlear implants (CIs) should selectively inhibit storage without affecting processing. This study examined two hypotheses: (1) a single task can be used to measure storage and processing in working memory, with recall accuracy indexing storage and rate of recall indexing processing; (2) storage is negatively impacted for children with CIs, but not processing.
METHOD: Two experiments were conducted. Experiment 1 included adults and children, 8 and 6 years of age, with normal hearing (NH). Procedures tested the prediction that accuracy of recall could index storage and rate of recall could index processing. Both measures were obtained during a serial-recall task using word lists designed to manipulate storage and processing demands independently: non-rhyming nouns were the standard condition; rhyming nouns were predicted to diminish storage capacity; and non-rhyming adjectives were predicted to increase processing load. Experiment 2 included 98 eight-year-olds, 48 with NH and 50 with CIs, in the same serial-recall task using the non-rhyming and rhyming nouns.
RESULTS: Experiment 1 showed that recall accuracy was poorest for the rhyming nouns and rate of recall was slowest for the non-rhyming adjectives, demonstrating that storage and processing can be indexed separately within a single task. In Experiment 2, children with CIs showed less accurate recall of serial order than children with NH, but rate of recall did not differ. Recall accuracy and rate of recall were not correlated in either experiment, reflecting the independence of these mechanisms.
CONCLUSIONS: It is possible to measure the operations of storage and processing mechanisms in working memory in a single task, and only storage is impaired for children with CIs. These findings suggest that research and clinical efforts should focus on enhancing the saliency of representation for children with CIs. Direct instruction of syntax and semantics could facilitate storage in real-world working memory tasks.
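As a concrete illustration of the two indices, a single serial-recall trial could be scored as follows, with accuracy indexing storage and rate indexing processing; the trial data are invented:

# Hypothetical single trial from a serial-recall task: the list presented,
# one child's spoken response, and the response duration in seconds.
presented = ["dog", "house", "boat", "chair", "truck", "lamp"]
recalled = ["dog", "house", "chair", "boat", "truck"]
response_duration_s = 4.2

# Storage index: proportion of items recalled in the correct serial position.
correct = sum(1 for i, word in enumerate(recalled) if word == presented[i])
accuracy = correct / len(presented)

# Processing index: rate of recall, items produced per second.
rate = len(recalled) / response_duration_s

print(f"serial-position accuracy = {accuracy:.2f}, rate = {rate:.2f} items/s")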


Journal of Communication Disorders | 2011

Sensitivity to structure in the speech signal by children with speech sound disorder and reading disability

Erin Phinney Johnson; Bruce F. Pennington; Joanna H. Lowenstein; Susan Nittrouer

PURPOSE: Children with speech sound disorder (SSD) and reading disability (RD) have poor phonological awareness, a problem believed to arise largely from deficits in processing the sensory information in speech, specifically individual acoustic cues. However, such cues are details of acoustic structure. Recent theories suggest that listeners also need to be able to integrate those details to perceive linguistically relevant form. This study examined the abilities of children with SSD, RD, and SSD+RD not only to process acoustic cues but also to recover linguistically relevant form from the speech signal.
METHOD: Ten- to 11-year-olds with SSD (n=17), RD (n=16), SSD+RD (n=17), and controls (n=16) were tested to examine their sensitivity to (1) voice onset times (VOT); (2) spectral structure in fricative-vowel syllables; and (3) vocoded sentences.
RESULTS: Children in all groups performed similarly with VOT stimuli, but children with disorders showed delays on the other tasks, although the specifics of their performance varied.
CONCLUSION: Children with poor phonemic awareness not only lack sensitivity to acoustic details, but are also less able to recover linguistically relevant forms. This is contrary to one of the main current theories of the relation between spoken and written language development.
LEARNING OUTCOMES: Readers will be able to (1) understand the role speech perception plays in phonological awareness, (2) distinguish between segmental and global structure analysis in speech perception, (3) describe differences and similarities in speech perception among children with speech sound disorder and/or reading disability, and (4) recognize the importance of broadening clinical interventions to focus on recognizing structure at all levels of speech analysis.


Journal of Experimental Child Psychology | 2011

What is the deficit in phonological processing deficits: Auditory sensitivity, masking, or category formation?

Susan Nittrouer; Samantha Shune; Joanna H. Lowenstein

Although children with language impairments, including those associated with reading, usually demonstrate deficits in phonological processing, there is minimal agreement as to the source of those deficits. This study examined two problems hypothesized to be possible sources: either poor auditory sensitivity to speech-relevant acoustic properties, mainly formant transitions, or enhanced masking of those properties. Adults and 8-year-olds with and without phonological processing deficits (PPD) participated. Children with PPD demonstrated weaker abilities than children with typical language development (TLD) in reading, sentence recall, and phonological awareness. Dependent measures were word recognition, discrimination of spectral glides, and phonetic judgments based on spectral and temporal cues. All tasks were conducted in quiet and in noise. Children with PPD showed neither poorer auditory sensitivity nor greater masking than adults and children with TLD, but they did demonstrate an unanticipated deficit in category formation for nonspeech sounds. These results suggest that these children may have an underlying deficit in perceptually organizing sensory information to form coherent categories.


Journal of Speech Language and Hearing Research | 2014

Do Adults With Cochlear Implants Rely on Different Acoustic Cues for Phoneme Perception Than Adults With Normal Hearing?

Aaron C. Moberly; Joanna H. Lowenstein; Eric Tarr; Amanda Caldwell-Tarr; D. Bradley Welling; Antoine J. Shahin; Susan Nittrouer

PURPOSE: Several acoustic cues specify any single phonemic contrast. Nonetheless, adult native speakers of a language share weighting strategies, showing preferential attention to some properties over others. Cochlear implant (CI) signal processing disrupts the salience of some cues: in general, amplitude structure remains readily available, but spectral structure less so. This study asked how well speech recognition is supported if CI users shift attention to salient cues not weighted strongly by native speakers.
METHOD: Twenty adults with CIs participated. The /bɑ/-/wɑ/ contrast was used because spectral and amplitude structure vary in correlated fashion for this contrast. Adults with normal hearing weight the spectral cue strongly but the amplitude cue negligibly. Three measurements were made: labeling decisions, spectral and amplitude discrimination, and word recognition.
RESULTS: Outcomes varied across listeners: some weighted the spectral cue strongly, some weighted the amplitude cue, and some weighted neither. Spectral discrimination predicted spectral weighting. Spectral weighting explained the most variance in word recognition. Age of onset of hearing loss predicted spectral weighting but not unique variance in word recognition.
CONCLUSION: The weighting strategies of listeners with normal hearing likely support speech recognition best, so efforts in implant design, fitting, and training should focus on developing those strategies.
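Cue weights in labeling studies of this kind are commonly estimated by regressing each listener's binary responses on standardized cue values, with the coefficient magnitudes indexing the weights. A minimal sketch with invented stimulus values and responses (not the study's data):

import numpy as np

# Invented stimulus grid and responses for one listener: a spectral cue
# (formant-transition duration, ms), an amplitude cue (rise time, ms),
# and binary labeling responses (1 = "wa", 0 = "ba").
spectral = np.array([20, 20, 40, 40, 60, 60, 80, 80], dtype=float)
amplitude = np.array([10, 90, 10, 90, 10, 90, 10, 90], dtype=float)
response = np.array([0, 0, 0, 1, 1, 1, 1, 1], dtype=float)

# Standardize the cues so the coefficient magnitudes are comparable.
X = np.column_stack([
    (spectral - spectral.mean()) / spectral.std(),
    (amplitude - amplitude.mean()) / amplitude.std(),
])

# Fit a logistic regression by plain gradient descent.
w, b = np.zeros(2), 0.0
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))    # predicted P("wa")
    w -= 0.3 * X.T @ (p - response) / len(response)
    b -= 0.3 * np.mean(p - response)

print(f"spectral weight = {w[0]:.2f}, amplitude weight = {w[1]:.2f}")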


International Journal of Audiology | 2013

Improving speech-in-noise recognition for children with hearing loss: potential effects of language abilities, binaural summation, and head shadow.

Susan Nittrouer; Amanda Caldwell-Tarr; Eric Tarr; Joanna H. Lowenstein; Caitlin Rice; Aaron C. Moberly

Objective: This study examined speech recognition in noise for children with hearing loss, compared it to recognition for children with normal hearing, and examined mechanisms that might explain variance in children's abilities to recognize speech in noise.
Design: Word recognition was measured in two levels of noise, both when the speech and noise were co-located in front and when the noise came separately from one side. Four mechanisms were examined as factors possibly explaining variance: vocabulary knowledge, sensitivity to phonological structure, binaural summation, and head shadow.
Study sample: Participants were 113 eight-year-old children. Forty-eight had normal hearing (NH) and 65 had hearing loss: 18 with hearing aids (HAs), 19 with one cochlear implant (CI), and 28 with two CIs.
Results: Phonological sensitivity explained a significant amount of between-groups variance in speech-in-noise recognition. Little evidence of binaural summation was found. Head shadow was similar in magnitude for children with NH and with CIs, regardless of whether they wore one or two CIs. Children with HAs showed reduced head shadow effects.
Conclusion: These outcomes suggest that in order to improve speech-in-noise recognition for children with hearing loss, intervention needs to be comprehensive, focusing on both language abilities and auditory mechanisms.
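Both auditory mechanisms are derived as simple differences between listening conditions. A sketch of that arithmetic with hypothetical scores (the study's actual conditions were more detailed):

# Hypothetical word-recognition scores (percent correct) for one child.
colocated = 62.0           # speech and noise both from the front
noise_to_side = 78.0       # noise moved to one side

# Head shadow: improvement when the noise moves away from the speech,
# leaving one ear on the far side of the head from the noise.
head_shadow = noise_to_side - colocated

# Binaural summation: benefit of two devices over one with speech and
# noise co-located.
one_device, two_devices = 58.0, 63.0
binaural_summation = two_devices - one_device

print(f"head shadow = {head_shadow:.1f} points; "
      f"binaural summation = {binaural_summation:.1f} points")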


International Journal of Pediatric Otorhinolaryngology | 2012

Measuring what matters: Effectively predicting language and literacy in children with cochlear implants

Susan Nittrouer; Amanda Caldwell; Christopher Holloman

OBJECTIVES: To evaluate how well various language measures typically used with very young children after they receive cochlear implants predict language and literacy skills as they enter school.
METHODS: Subjects were 50 children who had just completed kindergarten and were 6 or 7 years of age. All had previously participated in a longitudinal study from 12 to 48 months of age. Twenty-seven children had severe-to-profound hearing loss and wore cochlear implants, 8 had moderate hearing loss and wore hearing aids, and 15 had normal hearing. A latent variable of language/literacy skill was constructed from scores on six kinds of measures: (1) language comprehension; (2) expressive vocabulary; (3) phonological awareness; (4) literacy; (5) narrative skill; and (6) processing speed. Five kinds of language measures obtained at six-month intervals from 12 to 48 months of age were used as predictor variables in correlational analyses: (1) language comprehension; (2) expressive vocabulary; (3) syntactic structure of productive speech; and (4) form and (5) function of language used in language samples.
RESULTS: Outcomes quantified how much variance in kindergarten language/literacy performance was explained by each predictor variable at each earlier age of testing. Comprehension measures consistently predicted roughly 25-50 percent of the variance in kindergarten language/literacy performance and were the only effective predictors before 24 months of age. Vocabulary and syntactic complexity were strong predictors after roughly 36 months of age. Amount of speech produced in language samples and number of answers to parental queries explained moderate amounts of variance in performance after 24 months of age. Number of manual gestures and nonspeech vocalizations produced in language samples explained little to no variance before 24 months of age, and after that were negatively correlated with kindergarten performance. The number of imitations produced in language samples at 24 months of age explained about 10 percent of variance in kindergarten performance, but was otherwise uncorrelated or negatively correlated with kindergarten outcomes.
CONCLUSIONS: Before 24 months of age, the best predictor of later language success is language comprehension. In general, measures that index a child's cognitive processing of language are the most sensitive predictors of school-age language abilities.
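The variance a predictor explains is the squared correlation between the early measure and the kindergarten language/literacy score. A minimal sketch with invented scores:

import numpy as np

# Invented scores for ten children: an early comprehension measure and the
# kindergarten language/literacy score derived from the latent variable.
comprehension_24mo = np.array([88, 95, 102, 110, 85, 99, 120, 105, 92, 115])
kindergarten = np.array([90, 97, 100, 112, 83, 101, 118, 104, 95, 110])

r = np.corrcoef(comprehension_24mo, kindergarten)[0, 1]
print(f"r = {r:.2f}; variance explained = {r ** 2:.0%}")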

Collaboration


Dive into Susan Nittrouer's collaborations.

Top co-authors:

Aaron C. Moberly, The Ohio State University Wexner Medical Center

Eric Tarr, Ohio State University

Karen Chenausky, Beth Israel Deaconess Medical Center