Publications


Featured research published by Michelle R. Molis.


Cognitive Brain Research | 1998

MEG correlates of categorical perception of a voice onset time continuum in humans

Panagiotis G. Simos; Randy L. Diehl; Joshua I. Breier; Michelle R. Molis; George Zouridakis; Andrew C. Papanicolaou

Event-related magnetic fields (ERFs) were recorded from the left hemisphere in nine normal volunteers in response to four consonant-vowel (CV) syllables varying in voice-onset time (VOT). CVs with VOT values of 0 and +20 ms were perceived as /ga/ and those with VOT values of +40 and +60 ms as /ka/. Results showed (1) a displacement of the N1m peak equivalent current dipole toward more medial locations and (2) an abrupt reduction in peak magnetic flux strength as VOT values increased from +20 to +40 ms. No systematic differences were noted between the 0 and +20 ms stimuli or between the +40 and +60 ms CVs. The findings are in agreement with the results of multiunit invasive recordings in non-human primates regarding the spatial and temporal pattern of neuronal population responses in the human auditory cortex, which could serve as neural cues for the perception of voicing contrasts.
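
To make the manipulated variable concrete: voice-onset time is simply the lag between the stop release and the onset of voicing. A minimal, hedged sketch of how a crude VOT continuum could be synthesized (all parameter values here are illustrative, not those of the original stimuli):

```python
import numpy as np

def make_cv(vot_ms, f0=100, vowel_dur_s=0.3, sr=16000):
    """Crude CV token: an aspiration-noise segment lasting vot_ms,
    followed by a voiced segment approximated by a pulse train.
    Illustrative only; real stimuli also shape formant trajectories."""
    n_vot = int(sr * vot_ms / 1000)
    aspiration = 0.1 * np.random.randn(n_vot)
    t = np.arange(int(sr * vowel_dur_s)) / sr
    voiced = 0.5 * np.sign(np.sin(2 * np.pi * f0 * t))  # crude glottal pulses
    return np.concatenate([aspiration, voiced])

# The four continuum steps used in the study: 0, +20, +40, +60 ms.
continuum = {vot: make_cv(vot) for vot in (0, 20, 40, 60)}
```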


Attention, Perception, & Psychophysics | 2002

Generalizing a neuropsychological model of visual categorization to auditory categorization of vowels

W. Todd Maddox; Michelle R. Molis; Randy L. Diehl

Twelve male listeners categorized 54 synthetic vowel stimuli that varied in second and third formant frequency on a Bark scale into the American English vowel categories /ɪ/, /ʊ/, and /ɝ/. A neuropsychologically plausible model of categorization in the visual domain, the Striatal Pattern Classifier (SPC; Ashby & Waldron, 1999), is generalized to the auditory domain and applied separately to the data from each observer. Performance of the SPC is compared with that of the successful Normal A Posteriori Probability model (NAPP; Nearey, 1990; Nearey & Hogan, 1986) of auditory categorization. A version of the SPC that assumed piecewise-linear response-region partitions provided a better account of the data than the SPC that assumed linear partitions, and was indistinguishable from a version that assumed quadratic response-region partitions. A version of the NAPP model that assumed nonlinear response regions was superior to the NAPP model with linear partitions. The best-fitting SPC provided a good account of each observer's data but was outperformed by the best-fitting NAPP model. Implications for bridging the gap between the domains of visual and auditory categorization are discussed.
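
Models of this kind partition the (F2, F3) stimulus space into response regions. A bare-bones illustration of a linear partition follows; this is not the published SPC or NAPP implementation, and the weights are hypothetical free parameters that would be fit to each listener's responses:

```python
def linear_bound_classify(f2_bark, f3_bark, w=(1.0, -1.0), b=0.5):
    """Assign a category by which side of the linear decision bound
    w[0]*F2 + w[1]*F3 + b = 0 a stimulus falls on. The weights w and
    offset b are free parameters estimated per observer."""
    score = w[0] * f2_bark + w[1] * f3_bark + b
    return "A" if score > 0 else "B"

# A piecewise-linear partition, like the better-fitting SPC variant
# above, chains several such bounds, choosing which linear rule
# applies in each subregion of the (F2, F3) space.
```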


Ear and Hearing | 2012

Neural encoding and perception of speech signals in informational masking

Keri O’Connell Bennett; Curtis J. Billings; Michelle R. Molis; Marjorie R. Leek

Objective: To investigate the contributions of energetic and informational masking to neural encoding and perception in noise, using oddball discrimination and sentence recognition tasks. Design: P3 auditory evoked potential, behavioral discrimination, and sentence recognition data were recorded in response to speech and tonal signals presented to nine normal-hearing adults. Stimuli were presented at a signal-to-noise ratio of −3 dB in four background conditions: quiet, continuous noise, intermittent noise, and four-talker babble. Results: Responses to tonal signals were not significantly different across the three maskers. However, speech signals in four-talker babble elicited longer P3 latencies, smaller P3 amplitudes, poorer discrimination accuracy, and longer reaction times than in any of the other conditions. Results also demonstrated significant correlations between physiological and behavioral data: as P3 latency increased, reaction times increased and sentence recognition scores decreased. Conclusion: The data confirm a differential effect of masker type on the P3 and behavioral responses, and provide evidence that an informational masker interferes with speech understanding at the level of the cortex. Results also validate the P3 as a useful measure of physiological correlates of informational masking.
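
The −3 dB signal-to-noise ratio is just a power ratio: the masker carries twice the power of the signal, since 10·log10(1/2) ≈ −3 dB. A minimal sketch of the scaling, assuming signal and masker are same-length sample arrays (illustrative, not the study's stimulus code):

```python
import numpy as np

def mix_at_snr(signal, masker, snr_db):
    """Scale the masker so that 10*log10(P_signal / P_masker) equals
    snr_db, then add it to the signal."""
    p_sig = np.mean(signal ** 2)
    p_msk = np.mean(masker ** 2)
    gain = np.sqrt(p_sig / (p_msk * 10 ** (snr_db / 10)))
    return signal + gain * masker

# e.g. speech in four-talker babble at the study's SNR:
# mixed = mix_at_snr(speech, babble, snr_db=-3)
```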


Phonetica | 1995

Effect of Fundamental Frequency on Medial [+Voice] / [–Voice] Judgments

Randy L. Diehl; Michelle R. Molis

Previous research has suggested that the direction of short-duration fundamental frequency (F0) perturbations following consonants helps to signal consonant [+voice]/[-voice] (abbreviated as [voice]) status. It has been proposed that the [voice] cue corresponds to the direction and extent of F0 perturbations relative to the overall intonation contour. A competing view, the low-frequency hypothesis, suggests that F0 participates in a more general way whereby low-frequency energy near the consonant contributes to [+voice] judgments. Listeners identified multiple stimulus series, each varying in voice onset time and ranging from /aga/ to /aka/. The series differed in overall intonation contour as well as in the direction of F0 perturbation relative to that contour. Consistent with one version of the low-frequency hypothesis, the F0 value at voicing onset, rather than the relative direction of the F0 perturbation, was the best predictor of [voice] judgments.
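
The two hypotheses can be cast as competing predictors of a binary response: F0 at voicing onset versus the direction of the F0 perturbation relative to the contour. A hedged sketch of the onset-F0 predictor as a logistic model; the coefficients are invented for illustration and would in practice be estimated from listener identification data:

```python
import numpy as np

def logistic(x):
    return 1.0 / (1.0 + np.exp(-x))

def p_voiced(f0_onset_hz, beta0=20.0, beta1=-0.15):
    """Toy low-frequency-hypothesis predictor: the probability of a
    [+voice] response falls as F0 at voicing onset rises. Coefficients
    are invented. The competing account would instead use the signed
    direction of the F0 perturbation relative to the intonation contour."""
    return logistic(beta0 + beta1 * f0_onset_hz)
```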


Journal of the Acoustical Society of America | 2007

Perception of roughness by listeners with sensorineural hearing loss

Jennifer B. Tufts; Michelle R. Molis

The perception of auditory roughness presumably results from imperfect spectral or temporal resolution. Sensorineural hearing loss, by affecting spectral resolution, may therefore alter roughness perception. In this study, normal-hearing and hearing-impaired listeners estimated the roughness of amplitude-modulated tones varying in carrier frequency, modulation rate, and modulation depth. The hearing-impaired listeners' judgments were expected to reflect effects of impaired spectral resolution. Instead, their judgments were similar, in most respects, to those of the normal-hearing listeners, except at very slow modulation rates. Results suggest that mild-to-moderate sensorineural hearing loss increases the roughness of slowly fluctuating signals.
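
The stimuli are standard sinusoidally amplitude-modulated (SAM) tones. A minimal synthesis sketch; the sample rate and duration are arbitrary choices, not the study's:

```python
import numpy as np

def am_tone(fc, fm, depth, dur_s=1.0, sr=44100):
    """Sinusoidally amplitude-modulated tone:
    s(t) = (1 + depth * sin(2*pi*fm*t)) * sin(2*pi*fc*t),
    with carrier fc (Hz), modulation rate fm (Hz), depth in [0, 1]."""
    t = np.arange(int(sr * dur_s)) / sr
    return (1 + depth * np.sin(2 * np.pi * fm * t)) * np.sin(2 * np.pi * fc * t)

# In classic accounts, roughness is strongest near ~70 Hz modulation
# and weak at very slow rates, e.g. am_tone(fc=1000, fm=70, depth=1.0).
```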


Journal of the Acoustical Society of America | 1998

Phonological boundaries and the spectral center of gravity

Michelle R. Molis; Randy L. Diehl; Adam Jacks

A critical limit of 3–3.5 Bark has been reported for the "spectral center of gravity" effect between the second and third formants (F2 and F3) and is assumed to correspond to a perceptually natural boundary [A. K. Syrdal and H. S. Gopal, J. Acoust. Soc. Am. 79, 1086–1100 (1986)]. This hypothesis was tested for both the [+/−back] and [+/−coronal] distinctions in English vowels. Subjects identified two sets of three-formant synthetic vowels which varied orthogonally in F2 and F3 and ranged between /ʊ/–/ɪ/ ([+/−back]) or /ɝ/–/ʊ/ ([+/−coronal]). For the /ʊ/–/ɪ/ distinction, a boundary shift was observed solely as a function of F2, ruling out an invariant F3–F2 boundary. For the /ɝ/–/ʊ/ series, both F2 and F3 influenced boundary location. There was a relatively stable F3–F2 boundary for the mean identification responses in this case, but it occurred at less than the predicted 3–3.5 Bark difference. Follow-up results will also be presented. [Work supported by NIDCD.]
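
The Bark distance at issue is easy to compute. A sketch using Traunmüller's (1990) approximation of the Bark scale, which is one common conversion; the abstract does not say which formula was used, and the formant values below are illustrative:

```python
def hz_to_bark(f_hz):
    """Traunmuller's (1990) approximation of the Bark scale; one of
    several common formulas, not necessarily the one used here."""
    return 26.81 * f_hz / (1960.0 + f_hz) - 0.53

# The critical-distance hypothesis compares F3 - F2 in Bark to 3-3.5:
f2, f3 = 1500.0, 2500.0                 # illustrative formant values (Hz)
d = hz_to_bark(f3) - hz_to_bark(f2)     # ~3.4 Bark for these values
merged = d < 3.5                        # below ~3-3.5 Bark, the formants are
                                        # predicted to merge into one
                                        # spectral center of gravity
```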


Journal of the Acoustical Society of America | 1994

Cross‐language analysis of VCV coarticulation

Michelle R. Molis; Björn Lindblom; Wendy A. Castleman; René Carré

Öhman [J. Acoust. Soc. Am. 39, 151–168 (1966)] reported that superposition of a consonant closure gesture on a vowel-to-vowel transition was sufficient to describe VCV coarticulation; however, other researchers [R. McAllister and O. Engstrand, Fonetik, 115–119 (1992)] have found a language-dependent articulatory trough in the movement of the tongue during some VCV sequences. Such a trough would limit the possible extent of coarticulation. In this study, one male speaker each of American English, French, and Swedish produced VCV sequences. Vowels included /i/, /a/, and /u/. For each of three stop consonants (/b/, /p/, and /d/), an index of coarticulation was obtained through calculation of a locus equation. In addition, coarticulation indices were obtained from the output of an acoustic tube model that uses superposition to generate VCV sequences. Preliminary results indicated that superposition alone predicted coarticulation of unaspirated stops in all languages, but was not sufficient to explain the reduced c...
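
A locus equation is a linear regression of F2 at consonant release onto F2 at the vowel midpoint, fit across vowel contexts; its slope serves as the coarticulation index (a slope near 1 indicates strong vowel-on-consonant coarticulation, a slope near 0 almost none). A minimal sketch with hypothetical measurements:

```python
import numpy as np

def locus_equation(f2_onset_hz, f2_vowel_hz):
    """Least-squares fit of F2(onset) = slope * F2(vowel) + intercept;
    the slope is the coarticulation index."""
    slope, intercept = np.polyfit(f2_vowel_hz, f2_onset_hz, 1)
    return slope, intercept

# Hypothetical F2 measurements (Hz) for /b/ in /i/, /a/, /u/ contexts:
f2_vowel = np.array([2200.0, 1200.0, 800.0])
f2_onset = np.array([1900.0, 1150.0, 850.0])
slope, intercept = locus_equation(f2_onset, f2_vowel)  # slope = 0.75 here
```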


Journal of the Acoustical Society of America | 2018

Auditory stream segregation of iterated rippled noises by normal-hearing and hearing-impaired listeners

Daniel E. Shearer; Michelle R. Molis; Keri O. Bennett; Marjorie R. Leek

Individuals with hearing loss are thought to be less sensitive to the often subtle variations of acoustic information that support auditory stream segregation. Perceptual segregation can be influenced by differences in both the spectral and temporal characteristics of interleaved stimuli. The purpose of this study was to determine what stimulus characteristics support sequential stream segregation by normal-hearing and hearing-impaired listeners. Iterated rippled noises (IRNs) were used to assess the effects of tonality, spectral resolvability, and hearing loss on the perception of auditory streams in two pitch regions, corresponding to 250 and 1000 Hz. Overall, listeners with hearing loss were significantly less likely than normal-hearing listeners to segregate alternating IRNs into two auditory streams. Low-pitched IRNs were generally less likely to segregate into two streams than were higher-pitched IRNs. High-pass filtering was a strong contributor to reduced segregation for both groups. The tonality, or pitch strength, of the IRNs had a significant effect on streaming, but the effect was similar for both groups of subjects. These data demonstrate that stream segregation is influenced by many factors, including pitch differences, pitch region, spectral resolution, and degree of stimulus tonality, in addition to the loss of auditory sensitivity.
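
An iterated rippled noise is built with a delay-and-add loop: a copy of the noise, delayed by 1/pitch seconds and scaled, is repeatedly added back, and pitch strength (tonality) grows with each iteration. A minimal sketch with illustrative parameters:

```python
import numpy as np

def irn(dur_s, delay_s, gain, n_iter, sr=44100):
    """Iterated rippled noise: start from Gaussian noise and repeatedly
    add back a delayed, scaled copy. More iterations and higher gain
    yield stronger pitch at roughly 1/delay_s Hz."""
    x = np.random.randn(int(sr * dur_s))
    d = int(sr * delay_s)
    for _ in range(n_iter):
        y = x.copy()
        y[d:] += gain * x[:-d]
        x = y
    return x / np.max(np.abs(x))

# Pitch regions used above: 250 Hz -> 4 ms delay; 1000 Hz -> 1 ms delay.
# e.g. irn(dur_s=0.5, delay_s=0.004, gain=0.9, n_iter=16)
```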


Journal of the Acoustical Society of America | 2015

Effects of hearing impairment on sensitivity to dynamic spectral change

Michelle R. Molis; Nirmal Kumar Srinivasan; Frederick J. Gallun

The loss of peripheral auditory sensitivity, precise temporal processing, and frequency selectivity associated with hearing loss suggests that results obtained with pure-tone glide stimuli will not necessarily correspond to results obtained with more complex dynamic stimuli for listeners with hearing impairment. Normal-hearing (NH) and hearing-impaired (HI) listeners identified changes in frequency as rising or falling, both for tone glides and for spectrotemporal ripples. Tones glided linearly up or down in frequency with an extent of 1, 0.66, or 0.33 octaves, centered around 500 or 1500 Hz. Ripple stimuli, presented in octave bands centered around 500 or 1500 Hz or in a broadband condition extending from 20–20,000 Hz, had a spectral density of 2 cycles/octave and temporal modulation gliding up or down at rates of 1, 4, or 16 Hz. Sensitivity to dynamic changes was assessed as percent correct direction identification and bias was characterized as the ratio of correctly identified rising versus falling ...
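
A sketch of tone-glide synthesis follows. It assumes the glide is linear on a log-frequency axis, which the octave-based extents suggest; a linear-in-Hz variant would simply swap the frequency trajectory. Sample rate and duration are arbitrary choices:

```python
import numpy as np

def log_glide(fc, extent_oct, direction, dur_s=0.5, sr=44100):
    """Tone gliding over extent_oct octaves, geometrically centered on
    fc; direction is +1 (rising) or -1 (falling). The phase is the
    running integral of the instantaneous frequency."""
    t = np.arange(int(sr * dur_s)) / sr
    # instantaneous frequency: fc * 2^(-extent/2) up to fc * 2^(+extent/2)
    f_inst = fc * 2.0 ** (direction * extent_oct * (t / dur_s - 0.5))
    phase = 2 * np.pi * np.cumsum(f_inst) / sr
    return np.sin(phase)

# e.g. a 1-octave rising glide centered on 500 Hz:
# sig = log_glide(fc=500, extent_oct=1.0, direction=+1)
```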


Journal of the Acoustical Society of America | 2014

Establishing a clinical measure of spectral-ripple discrimination

Michelle R. Molis; Rachael C. Gilbert; Frederick J. Gallun

Spectral-ripple discrimination thresholds have been used effectively to assess frequency-resolving power in cochlear implant users. To improve potential clinical utility as a reliable and time-efficient measure of auditory bandwidth for listeners with acoustic hearing, possible confounds and limitations of the method must be addressed. This study examined frequency specificity and the possibility of edge-listening with narrowband stimuli. An adaptive 4-IFC procedure was used to determine ripple discrimination thresholds for normally hearing (NH) and hearing-impaired (HI) listeners. Stimuli were broadband (100–5000 Hz), high pass (1000–5000 Hz), or low pass (100–1000 Hz) logarithmically scaled, sinusoidally modulated Gaussian noises. In some conditions, Gaussian flanking noise was introduced to eliminate potential edge-listening cues. As expected, discrimination thresholds were significantly reduced for the HI listeners. Additionally, results indicate that both NH and HI listeners are able to use edge cues...
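
One common way to realize "logarithmically scaled, sinusoidally modulated" ripple noise is to impose a sinusoidal dB envelope on a log-frequency axis in the spectral domain. A sketch under that assumption; the ripple density, depth, and band edges below are illustrative, not the study's:

```python
import numpy as np

def ripple_noise(f_lo, f_hi, density, phase, depth_db=30, dur_s=0.5, sr=44100):
    """Noise with a sinusoidal spectral envelope on a log-frequency
    axis ("spectral ripple"): density is in ripples/octave and phase
    shifts the ripple peaks. Built in the frequency domain with random
    component phases, then inverse-FFT'd back to a waveform."""
    n = int(sr * dur_s)
    freqs = np.fft.rfftfreq(n, 1 / sr)
    spec = np.zeros(len(freqs), dtype=complex)
    band = (freqs >= f_lo) & (freqs <= f_hi)
    octaves = np.log2(freqs[band] / f_lo)
    env_db = (depth_db / 2) * np.sin(2 * np.pi * density * octaves + phase)
    spec[band] = 10 ** (env_db / 20) * np.exp(2j * np.pi * np.random.rand(band.sum()))
    return np.fft.irfft(spec, n)

# A discrimination trial might ask, e.g., whether phase=0 and phase=pi
# ripples (peaks swapped with troughs) sound different.
```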

Collaboration


Dive into Michelle R. Molis's collaborations.

Top Co-Authors

Marjorie R. Leek (Walter Reed Army Medical Center)
Randy L. Diehl (University of Texas at Austin)
Aaron R. Seitz (University of California)
David A. Eddins (University of South Florida)
Eric Hoover (Northwestern University)