Publications


Featured research published by Matthew Winn.


Ear and Hearing | 2015

The Impact of Auditory Spectral Resolution on Listening Effort Revealed by Pupil Dilation

Matthew Winn; Jan Edwards; Ruth Y. Litovsky

Objectives: This study measured the impact of auditory spectral resolution on listening effort. Systematic degradation in spectral resolution was hypothesized to elicit corresponding systematic increases in pupil dilation, consistent with the notion of pupil dilation as a marker of cognitive load. Design: Spectral resolution of sentences was varied with two different vocoders: (1) a noise-channel vocoder with a variable number of spectral channels; and (2) a vocoder designed to simulate front-end processing of a cochlear implant, including peak-picking channel selection with variable synthesis filter slopes to simulate spread of neural excitation. Pupil dilation was measured after subject-specific luminance adjustment and trial-specific baseline measures. Mixed-effects growth curve analysis was used to model pupillary responses over time. Results: For both types of vocoder, pupil dilation grew with each successive degradation in spectral resolution. Within each condition, pupillary responses were not related to intelligibility scores, and the effect of spectral resolution on pupil dilation persisted even when only analyzing trials in which responses were 100% correct. Conclusions: Intelligibility scores alone were not sufficient to quantify the effort required to understand speech with poor resolution. Degraded spectral resolution results in increased effort required to understand speech, even when intelligibility is at 100%. Pupillary responses were a sensitive and highly granular measurement to reveal changes in listening effort. Pupillary responses might potentially reveal the benefits of aural prostheses that are not captured by speech intelligibility performance alone as well as the disadvantages that are overcome by increased listening effort.
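The trial-specific baseline correction mentioned in the Design can be sketched as follows. The array shapes, window length, and function name are illustrative assumptions, not details taken from the study:

```python
import numpy as np

def baseline_correct(trials, baseline_samples):
    """Subtract each trial's pre-stimulus baseline from its pupil trace.

    trials: (n_trials, n_samples) array of pupil diameters.
    baseline_samples: number of initial samples treated as the
    pre-stimulus baseline window.
    """
    baseline = trials[:, :baseline_samples].mean(axis=1, keepdims=True)
    return trials - baseline

# Toy example: two flat "trials" at different absolute pupil diameters;
# after correction, both start at zero and only the dilation remains.
trials = np.array([[4.0, 4.0, 4.2, 4.5],
                   [5.0, 5.0, 5.1, 5.3]])
corrected = baseline_correct(trials, baseline_samples=2)
```

Removing each trial's own baseline is what lets dilation be compared across trials and subjects despite differences in resting pupil size.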


Journal of the Acoustical Society of America | 2015

Using speech sounds to test functional spectral resolution in listeners with cochlear implants

Matthew Winn; Ruth Y. Litovsky

In this study, spectral properties of speech sounds were used to test functional spectral resolution in people who use cochlear implants (CIs). Specifically, perception of the /ba/-/da/ contrast was tested using two spectral cues: Formant transitions (a fine-resolution cue) and spectral tilt (a coarse-resolution cue). Higher weighting of the formant cues was used as an index of better spectral cue perception. Participants included 19 CI listeners and 10 listeners with normal hearing (NH), for whom spectral resolution was explicitly controlled using a noise vocoder with variable carrier filter widths to simulate electrical current spread. Perceptual weighting of the two cues was modeled with mixed-effects logistic regression, and was found to systematically vary with spectral resolution. The use of formant cues was greatest for NH listeners for unprocessed speech, and declined in the two vocoded conditions. Compared to NH listeners, CI listeners relied less on formant transitions, and more on spectral tilt. Cue-weighting results showed moderately good correspondence with word recognition scores. The current approach to testing functional spectral resolution uses auditory cues that are known to be important for speech categorization, and can thus potentially serve as the basis upon which CI processing strategies and innovations are tested.
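The cue-weighting logic (a larger regression coefficient for a cue indicates heavier perceptual reliance on it) can be sketched with simulated trials. This is a single-listener toy fit by gradient descent, not the mixed-effects model used in the study, and all numbers are fabricated:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated /ba/-/da/ trials: two cues on z-scored scales. A listener
# who weights the formant cue heavily should yield a larger formant
# coefficient than tilt coefficient.
n = 2000
formant = rng.standard_normal(n)        # fine-resolution cue
tilt = rng.standard_normal(n)           # coarse-resolution cue
logit = 2.0 * formant + 0.5 * tilt      # "true" cue weights
p_da = 1.0 / (1.0 + np.exp(-logit))
resp = rng.random(n) < p_da             # True = "da" response

# Fit logistic regression by gradient ascent (intercept + two slopes).
X = np.column_stack([np.ones(n), formant, tilt])
w = np.zeros(3)
for _ in range(5000):
    p = 1.0 / (1.0 + np.exp(-X @ w))
    w += 0.1 * X.T @ (resp - p) / n

# w[1] (formant weight) recovers a value well above w[2] (tilt weight).
```

The fitted slopes serve as the cue weights; comparing them across listeners is what indexes reliance on fine- versus coarse-resolution cues.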


Trends in hearing | 2016

Rapid Release From Listening Effort Resulting From Semantic Context, and Effects of Spectral Degradation and Cochlear Implants

Matthew Winn

People with hearing impairment are thought to rely heavily on context to compensate for reduced audibility. Here, we explore the resulting cost of this compensatory behavior, in terms of effort and the efficiency of ongoing predictive language processing. The listening task featured predictable or unpredictable sentences, and participants included people with cochlear implants as well as people with normal hearing who heard full-spectrum/unprocessed or vocoded speech. The crucial metric was the growth of the pupillary response and the reduction of this response for predictable versus unpredictable sentences, which would suggest reduced cognitive load resulting from predictive processing. Semantic context led to rapid reduction of listening effort for people with normal hearing; the reductions were observed well before the offset of the stimuli. Effort reduction was slightly delayed for people with cochlear implants and considerably more delayed for normal-hearing listeners exposed to spectrally degraded noise-vocoded signals; this pattern of results was maintained even when intelligibility was perfect. Results suggest that speed of sentence processing can still be disrupted, and exertion of effort can be elevated, even when intelligibility remains high. We discuss implications for experimental and clinical assessment of speech recognition, in which good performance can arise because of cognitive processes that occur after a stimulus, during a period of silence. Because silent gaps are not common in continuous flowing speech, the cognitive/linguistic restorative processes observed after sentences in such studies might not be available to listeners in everyday conversations, meaning that speech recognition in conventional tests might overestimate sentence-processing capability.


Journal of the Acoustical Society of America | 2015

Predicting contrast effects following reliable spectral properties in speech perception

Christian E. Stilp; Paul W. Anderson; Matthew Winn

Vowel perception is influenced by precursor sounds that are resynthesized to shift frequency regions [Ladefoged and Broadbent (1957). J. Acoust. Soc. Am. 29(1), 98-104] or filtered to emphasize narrow [Kiefte and Kluender (2008). J. Acoust. Soc. Am. 123(1), 366-376] or broad frequency regions [Watkins (1991). J. Acoust. Soc. Am. 90(6), 2942-2955]. Spectral differences between filtered precursors and vowel targets are perceptually enhanced, producing spectral contrast effects (e.g., emphasizing spectral properties of /ɪ/ in the precursor elicited more /ɛ/ responses to an /ɪ/-/ɛ/ vowel continuum, and vice versa). Historically, precursors have been processed by high-gain filters, resulting in prominent stable long-term spectral properties. Perceptual sensitivity to subtler but equally reliable spectral properties is unknown. Here, precursor sentences were processed by filters of variable bandwidths and different gains, then followed by vowel sounds varying from /ɪ/-/ɛ/. Contrast effects were widely observed, including when filters had only 100-Hz bandwidth or +5 dB gain. Average filter power was a good predictor of the magnitudes of contrast effects, revealing a close linear correspondence between the prominence of a reliable spectral property and the size of shifts in perceptual responses. High sensitivity to subtle spectral regularities suggests contrast effects are not limited to high-power filters, and thus may be more pervasive in speech perception than previously thought.


Ear and Hearing | 2016

Assessment of Spectral and Temporal Resolution in Cochlear Implant Users Using Psychoacoustic Discrimination and Speech Cue Categorization.

Matthew Winn; Jong Ho Won; Il Joon Moon

Objectives: This study was conducted to measure auditory perception by cochlear implant users in the spectral and temporal domains, using tests of either categorization (using speech-based cues) or discrimination (using conventional psychoacoustic tests). The authors hypothesized that traditional nonlinguistic tests assessing spectral and temporal auditory resolution would correspond to speech-based measures assessing specific aspects of phonetic categorization assumed to depend on spectral and temporal auditory resolution. The authors further hypothesized that speech-based categorization performance would ultimately be a superior predictor of speech recognition performance, because of the fundamental nature of speech recognition as categorization. Design: Nineteen cochlear implant listeners and 10 listeners with normal hearing participated in a suite of tasks that included spectral ripple discrimination, temporal modulation detection, and syllable categorization, which was split into a spectral cue-based task (targeting the /ba/–/da/ contrast) and a timing cue-based task (targeting the /b/–/p/ and /d/–/t/ contrasts). Speech sounds were manipulated to contain specific spectral or temporal modulations (formant transitions or voice onset time, respectively) that could be categorized. Categorization responses were quantified using logistic regression to assess perceptual sensitivity to acoustic phonetic cues. Word recognition testing was also conducted for cochlear implant listeners. Results: Cochlear implant users were generally less successful at utilizing both spectral and temporal cues for categorization compared with listeners with normal hearing. For the cochlear implant listener group, spectral ripple discrimination was significantly correlated with the categorization of formant transitions; both were correlated with better word recognition. 
Temporal modulation detection using 100- and 10-Hz-modulated noise was not correlated either with the cochlear implant subjects’ categorization of voice onset time or with word recognition. Word recognition was correlated more closely with categorization of the controlled speech cues than with performance on the psychophysical discrimination tasks. Conclusions: When evaluating people with cochlear implants, controlled speech-based stimuli are feasible to use in tests of auditory cue categorization, to complement traditional measures of auditory discrimination. Stimuli based on specific speech cues correspond to counterpart nonlinguistic measures of discrimination, but potentially show better correspondence with speech perception more generally. The ubiquity of the spectral (formant transition) and temporal (voice onset time) stimulus dimensions across languages highlights the potential to use this testing approach even in cases where English is not the native language.
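As an illustration of the temporal-modulation stimuli mentioned above (10- and 100-Hz-modulated noise), sinusoidally amplitude-modulated noise can be generated as follows. The sampling rate, duration, and modulation depth are arbitrary choices, not the study's parameters:

```python
import numpy as np

def modulated_noise(dur_s, fs, fm_hz, depth):
    """Sinusoidally amplitude-modulated Gaussian noise.

    fm_hz: modulation rate (the study used 10- and 100-Hz rates);
    depth: modulation depth in [0, 1]; depth=0 is unmodulated noise.
    """
    rng = np.random.default_rng(1)
    t = np.arange(int(dur_s * fs)) / fs
    carrier = rng.standard_normal(t.size)
    envelope = 1.0 + depth * np.sin(2.0 * np.pi * fm_hz * t)
    return carrier * envelope

sig = modulated_noise(dur_s=0.5, fs=16000, fm_hz=10, depth=0.8)
```

In a modulation-detection task, the listener's job is to distinguish this stimulus from the depth=0 version; the threshold depth indexes temporal resolution.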


Ear and Hearing | 2017

The Acoustics of Word-Initial Fricatives and Their Effect on Word-Level Intelligibility in Children With Bilateral Cochlear Implants.

P. F. Reidy; K. Kristensen; Matthew Winn; Ruth Y. Litovsky; Jan Edwards

Objectives: Previous research has found that relative to their peers with normal hearing (NH), children with cochlear implants (CIs) produce the sibilant fricatives /s/ and /∫/ less accurately and with less subphonemic acoustic contrast. The present study sought to further investigate these differences across groups in two ways. First, subphonemic acoustic properties were investigated in terms of dynamic acoustic features that indexed more than just the contrast between /s/ and /∫/. Second, the authors investigated whether such differences in subphonemic acoustic contrast between sibilant fricatives affected the intelligibility of sibilant-initial single word productions by children with CIs and their peers with NH. Design: In experiment 1, productions of /s/ and /∫/ in word-initial prevocalic contexts were elicited from 22 children with bilateral CIs (aged 4 to 7 years) who had at least 2 years of CI experience and from 22 chronological age-matched peers with NH. Acoustic features were measured from 17 points across the fricatives: peak frequency was measured to index the place of articulation contrast; spectral variance and amplitude drop were measured to index the degree of sibilance. These acoustic trajectories were fitted with growth-curve models to analyze time-varying spectral change. In experiment 2, phonemically accurate word productions that were elicited in experiment 1 were embedded within four-talker babble and played to 80 adult listeners with NH. Listeners were asked to repeat the words, and their accuracy rate was used as a measure of the intelligibility of the word productions. Regression analyses were run to test which acoustic properties measured in experiment 1 predicted the intelligibility scores from experiment 2. Results: The peak frequency trajectories indicated that the children with CIs produced less acoustic contrast between /s/ and /∫/. 
Group differences were observed in terms of the dynamic aspects (i.e., the trajectory shapes) of the acoustic properties. In the productions by children with CIs, the peak frequency and the amplitude drop trajectories were shallower, and the spectral variance trajectories were more asymmetric, exhibiting greater increases in variance (i.e., reduced sibilance) near the fricative–vowel boundary. The listeners’ responses to the word productions indicated that when produced by children with CIs, /∫/-initial words were significantly more intelligible than /s/-initial words. However, when produced by children with NH, /s/-initial words and /∫/-initial words were equally intelligible. Intelligibility was partially predicted from the acoustic properties (Cox & Snell pseudo-R2 > 0.190), and the significant predictors were predominantly dynamic, rather than static, ones. Conclusions: Productions from children with CIs differed from those produced by age-matched NH controls in terms of their subphonemic acoustic properties. The intelligibility of sibilant-initial single-word productions by children with CIs is sensitive to the place of articulation of the initial consonant (/∫/-initial words were more intelligible than /s/-initial words), but productions by children with NH were equally intelligible across both places of articulation. Therefore, children with CIs still exhibit differential production abilities for sibilant fricatives at an age when their NH peers do not.
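The growth-curve idea (modeling how an acoustic measure changes across the 17 measurement points, rather than only its average) can be illustrated with an ordinary polynomial fit. The trajectory values here are fabricated, and the study used mixed-effects growth-curve models rather than a plain least-squares fit:

```python
import numpy as np

# Hypothetical 17-point peak-frequency trajectory (Hz) across a
# fricative: the quadratic's coefficients summarize its shape.
time = np.linspace(0.0, 1.0, 17)                # normalized time
peak_hz = 6000 + 1500 * time - 800 * time ** 2

# Recover shape parameters: intercept ~ onset frequency,
# linear term ~ overall slope, quadratic term ~ curvature.
coefs = np.polyfit(time, peak_hz, deg=2)        # [quadratic, linear, intercept]
```

Comparing such shape parameters across groups is what exposes "shallower" or "more asymmetric" trajectories, differences that a single static measurement would miss.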


Journal of Experimental Psychology: Human Perception and Performance | 2017

Evaluating the sources and functions of gradiency in phoneme categorization: An individual differences approach.

Efthymia C. Kapnoula; Matthew Winn; Eun Jong Kong; Jan Edwards; Bob McMurray

During spoken language comprehension listeners transform continuous acoustic cues into categories (e.g., /b/ and /p/). While long-standing research suggests that phonetic categories are activated in a gradient way, there are also clear individual differences in that more gradient categorization has been linked to various communication impairments such as dyslexia and specific language impairments (Joanisse, Manis, Keating, & Seidenberg, 2000; López-Zamora, Luque, Álvarez, & Cobos, 2012; Serniclaes, Van Heghe, Mousty, Carré, & Sprenger-Charolles, 2004; Werker & Tees, 1987). Crucially, most studies have used 2-alternative forced choice (2AFC) tasks to measure the sharpness of between-category boundaries. Here we propose an alternative paradigm that allows us to measure categorization gradiency in a more direct way. Furthermore, we follow an individual differences approach to (a) link this measure of gradiency to multiple cue integration, (b) explore its relationship to a set of other cognitive processes, and (c) evaluate its role in individuals’ ability to perceive speech in noise. Our results provide validation for this new method of assessing phoneme categorization gradiency and offer preliminary insights into how different aspects of speech perception may be linked to each other and to more general cognitive processes.


Journal of the Acoustical Society of America | 2016

Binaural hearing in children using Gaussian enveloped and transposed tones

Erica Ehlers; Alan Kan; Matthew Winn; Corey Stoelb; Ruth Y. Litovsky

Children who use bilateral cochlear implants (BiCIs) show significantly poorer sound localization skills than their normal hearing (NH) peers. This difference has been attributed, in part, to the fact that cochlear implants (CIs) do not faithfully transmit interaural time differences (ITDs) and interaural level differences (ILDs), which are known to be important cues for sound localization. Interestingly, little is known about binaural sensitivity in NH children, in particular, with stimuli that constrain acoustic cues in a manner representative of CI processing. In order to better understand and evaluate binaural hearing in children with BiCIs, the authors first undertook a study on binaural sensitivity in NH children ages 8-10, and in adults. Experiments evaluated sound discrimination and lateralization using ITD and ILD cues, for stimuli with robust envelope cues, but poor representation of temporal fine structure. Stimuli were spondaic words, Gaussian-enveloped tone pulse trains (100 pulse-per-second), and transposed tones. Results showed that discrimination thresholds in children were adult-like (15-389 μs for ITDs and 0.5-6.0 dB for ILDs). However, lateralization based on the same binaural cues showed higher variability than seen in adults. Results are discussed in the context of factors that may be responsible for poor representation of binaural cues in bilaterally implanted children.
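A stimulus of the kind described (a Gaussian-enveloped tone pulse train carrying an ITD) can be sketched as below. The carrier frequency, envelope width, and ITD value are illustrative; only the 100-pps rate comes from the abstract, and real experiments impose ITDs with finer-than-sample control:

```python
import numpy as np

def gaussian_pulse_train(fs, dur_s, carrier_hz, pps, sigma_s):
    """Train of Gaussian-enveloped tone bursts.

    pps: pulses per second (the study used 100 pps);
    sigma_s: width of each Gaussian envelope in seconds.
    """
    t = np.arange(int(round(dur_s * fs))) / fs
    env = np.zeros_like(t)
    for center in np.arange(0.5 / pps, dur_s, 1.0 / pps):
        env += np.exp(-0.5 * ((t - center) / sigma_s) ** 2)
    return env * np.sin(2.0 * np.pi * carrier_hz * t)

fs = 44100
mono = gaussian_pulse_train(fs, dur_s=0.3, carrier_hz=4000,
                            pps=100, sigma_s=0.001)

# Impose an ITD by delaying one ear's copy by a whole number of
# samples (300 us at 44.1 kHz is ~13 samples).
itd_samples = int(round(300e-6 * fs))
left = mono
right = np.concatenate([np.zeros(itd_samples), mono[:-itd_samples]])
```

Because the envelope is well defined but the carrier's fine structure is hard to use at high frequencies, such stimuli constrain listeners toward envelope-based ITD cues, roughly analogous to what CI processing transmits.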


Journal of the Acoustical Society of America | 2014

Measurement of spectral resolution and listening effort in people with cochlear implants

Matthew Winn; Ruth Y. Litovsky; Jan Edwards

Cochlear implants (CIs) provide notably poor spectral resolution, which poses significant challenges for speech understanding, and places greater demands on listening effort. We evaluated a CI stimulation strategy designed to improve spectral resolution by measuring its impact on listening effort (as quantified by pupil dilation, which is considered to be a reliable index of cognitive load). Specifically, we investigated dichotic interleaved processing channels (where odd channels are active in one ear, and even channels are active in the contralateral ear). We used a sentence listening and repetition task where listeners alternated between their everyday clinical CI configurations and the interleaved channel strategy, to test which offered better resolution and demanded less effort. Methods and analyses stemmed from previous experiments confirming that spectral resolution has a systematic impact on listening effort in individuals with normal hearing. Pupil dilation measures were generally consistent with...


Journal of the Acoustical Society of America | 2013

The impact of spectral resolution on listening effort revealed by pupil dilation

Matthew Winn; Jan Edwards

Poor spectral resolution is a consequence of cochlear hearing loss and remains arguably the primary limiting factor in success with a cochlear implant. In addition to showing reduced success on word recognition compared to their normal-hearing peers, listeners with hearing impairment are also reported to exert greater effort in everyday listening, leading to difficulties at the workplace and in social settings. Pupil dilation is an index of cognitive effort in various tasks, including speech perception. In this study, spectral resolution was explicitly controlled for in listeners with normal hearing using a noise vocoder with a variable number of processing channels. Pupil dilation during a sentence listening and repetition task revealed a systematic relationship between spectral resolution and listening effort; as resolution grew poorer, effort increased. Significant changes in listening effort belie the notion of “ceiling” performance in degraded conditions; listeners are able to achieve success in the ...
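A minimal FFT-based noise-channel vocoder in the spirit of the one described (the number of channels controls spectral resolution) can be sketched as follows. The log-spaced band edges, frequency range, and FFT-based filtering are simplifying assumptions, not the study's actual signal chain:

```python
import numpy as np

def _envelope(x):
    """Amplitude envelope via the analytic signal (FFT-based Hilbert)."""
    n = x.size
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.abs(np.fft.ifft(np.fft.fft(x) * h))

def noise_vocode(signal, fs, n_channels, f_lo=100.0, f_hi=8000.0):
    """Noise-channel vocoder: each band's envelope modulates
    band-limited noise. Fewer channels -> coarser spectral resolution."""
    rng = np.random.default_rng(0)
    n = signal.size
    freqs = np.fft.rfftfreq(n, 1.0 / fs)
    edges = np.geomspace(f_lo, f_hi, n_channels + 1)   # log-spaced bands
    sig_spec = np.fft.rfft(signal)
    noise_spec = np.fft.rfft(rng.standard_normal(n))
    out = np.zeros(n)
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (freqs >= lo) & (freqs < hi)
        band = np.fft.irfft(sig_spec * mask, n)        # band-limited speech
        carrier = np.fft.irfft(noise_spec * mask, n)   # band-limited noise
        out += _envelope(band) * carrier
    return out

fs = 16000
t = np.arange(8000) / fs                    # 0.5 s
tone = np.sin(2.0 * np.pi * 1000.0 * t)     # stand-in for a speech signal
vocoded = noise_vocode(tone, fs, n_channels=4)
```

Sweeping n_channels (e.g., from 32 down to 4) degrades spectral detail while preserving temporal envelopes, which is what lets these studies manipulate resolution independently of audibility.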

Collaboration


Dive into Matthew Winn's collaborations.

Top Co-Authors

Ruth Y. Litovsky
University of Wisconsin-Madison

Jan Edwards
University of Wisconsin-Madison

Alan Kan
University of Wisconsin-Madison

Richard Wright
University of Washington