Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Molly J. Henry is active.

Publication


Featured research published by Molly J. Henry.


Proceedings of the National Academy of Sciences of the United States of America | 2012

Frequency modulation entrains slow neural oscillations and optimizes human listening behavior

Molly J. Henry; Jonas Obleser

The human ability to continuously track dynamic environmental stimuli, in particular speech, is proposed to profit from “entrainment” of endogenous neural oscillations, which involves phase reorganization such that “optimal” phase comes into line with temporally expected critical events, resulting in improved processing. The current experiment goes beyond previous work in this domain by addressing two thus far unanswered questions. First, how general is neural entrainment to environmental rhythms: Can neural oscillations be entrained by temporal dynamics of ongoing rhythmic stimuli without abrupt onsets? Second, does neural entrainment optimize performance of the perceptual system: Does human auditory perception benefit from neural phase reorganization? In a human electroencephalography study, listeners detected short gaps distributed uniformly with respect to the phase angle of a 3-Hz frequency-modulated stimulus. Listeners’ ability to detect gaps in the frequency-modulated sound was not uniformly distributed in time, but clustered in certain preferred phases of the modulation. Moreover, the optimal stimulus phase was individually determined by the neural delta oscillation entrained by the stimulus. Finally, delta phase predicted behavior better than stimulus phase or the event-related potential after the gap. This study demonstrates behavioral benefits of phase realignment in response to frequency-modulated auditory stimuli, overall suggesting that frequency fluctuations in natural environmental input provide a pacing signal for endogenous neural oscillations, thereby influencing perceptual processing.
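The analysis above relates gap-detection performance to the phase of a slow modulation. As a schematic sketch only (simulated data, not the authors' pipeline; bin count and the simulated phase-dependence are illustrative assumptions), one can bin binary detection outcomes by stimulus phase and locate the preferred phase via the circular mean:

```python
import numpy as np

def preferred_phase(phases, hits, n_bins=18):
    """Bin binary detection outcomes by phase (radians, in [-pi, pi))
    and return per-bin hit rates plus the circular mean of hit phases."""
    edges = np.linspace(-np.pi, np.pi, n_bins + 1)
    idx = np.clip(np.digitize(phases, edges) - 1, 0, n_bins - 1)
    rates = np.array([hits[idx == b].mean() if np.any(idx == b) else np.nan
                      for b in range(n_bins)])
    # Circular mean of the phases at which gaps were detected
    detected = phases[hits.astype(bool)]
    pref = np.angle(np.mean(np.exp(1j * detected)))
    return rates, pref

# Simulated example: detection is most likely near phase 0
rng = np.random.default_rng(0)
ph = rng.uniform(-np.pi, np.pi, 5000)
p_hit = 0.5 + 0.3 * np.cos(ph)  # phase-dependent detection probability
hits = (rng.uniform(size=ph.size) < p_hit).astype(float)
rates, pref = preferred_phase(ph, hits)
```

With this simulated phase-dependence, the recovered preferred phase clusters near 0, mirroring the finding that hit rates are not uniform across the modulation cycle.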


NeuroImage | 2011

FMRI investigation of cross-modal interactions in beat perception: audition primes vision, but not vice versa.

Jessica A. Grahn; Molly J. Henry; J. Devin McAuley

How we measure time and integrate temporal cues from different sensory modalities are fundamental questions in neuroscience. Sensitivity to a “beat” (such as that routinely perceived in music) differs substantially between auditory and visual modalities. Here we examined beat sensitivity in each modality, and examined cross-modal influences, using functional magnetic resonance imaging (fMRI) to characterize brain activity during perception of auditory and visual rhythms. In separate fMRI sessions, participants listened to auditory sequences or watched visual sequences. The order of auditory and visual sequence presentation was counterbalanced so that cross-modal order effects could be investigated. Participants judged whether sequences were speeding up or slowing down, and the pattern of tempo judgments was used to derive a measure of sensitivity to an implied beat. As expected, participants were less sensitive to an implied beat in visual sequences than in auditory sequences. However, visual sequences produced a stronger sense of beat when preceded by auditory sequences with identical temporal structure. Moreover, increases in brain activity were observed in the bilateral putamen for visual sequences preceded by auditory sequences when compared to visual sequences without prior auditory exposure. No such order-dependent differences (behavioral or neural) were found for the auditory sequences. The results provide further evidence for the role of the basal ganglia in internal generation of the beat and suggest that an internal auditory rhythm representation may be activated during visual rhythm perception.


Proceedings of the National Academy of Sciences of the United States of America | 2014

Entrained neural oscillations in multiple frequency bands comodulate behavior

Molly J. Henry; Björn Herrmann; Jonas Obleser

Significance: Our sensory environment is teeming with complex rhythmic structure, but how do environmental rhythms (such as those present in speech or music) affect our perception? In a human electroencephalography study, we investigated how auditory perception is affected when brain rhythms (neural oscillations) synchronize with the complex rhythmic structure in synthetic sounds that possess rhythmic characteristics similar to speech. We found that neural phase in multiple frequency bands synchronized with the complex stimulus rhythm and interacted to determine target-detection performance. Critically, the influence of neural oscillations on target-detection performance was present only for frequency bands synchronized with the rhythmic structure of the stimuli. Our results elucidate how multiple frequency bands shape the effective neural processing of environmental stimuli.

Our sensory environment is teeming with complex rhythmic structure, to which neural oscillations can become synchronized. Neural synchronization to environmental rhythms (entrainment) is hypothesized to shape human perception, as rhythmic structure acts to temporally organize cortical excitability. In the current human electroencephalography study, we investigated how behavior is influenced by neural oscillatory dynamics when the rhythmic fluctuations in the sensory environment take on a naturalistic degree of complexity. Listeners detected near-threshold gaps in auditory stimuli that were simultaneously modulated in frequency (frequency modulation, 3.1 Hz) and amplitude (amplitude modulation, 5.075 Hz); modulation rates and types were chosen to mimic the complex rhythmic structure of natural speech. Neural oscillations were entrained by both the frequency modulation and amplitude modulation in the stimulation. Critically, listeners’ target-detection accuracy depended on the specific phase–phase relationship between entrained neural oscillations in both the 3.1-Hz and 5.075-Hz frequency bands, with the best performance occurring when the respective troughs in both neural oscillations coincided. Neural-phase effects were specific to the frequency bands entrained by the rhythmic stimulation. Moreover, the degree of behavioral comodulation by neural phase in both frequency bands exceeded the degree of behavioral modulation by either frequency band alone. Our results elucidate how fluctuating excitability, within and across multiple entrained frequency bands, shapes the effective neural processing of environmental stimuli. More generally, the frequency-specific nature of behavioral comodulation effects suggests that environmental rhythms act to reduce the complexity of high-dimensional neural states.
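The two modulation rates above (3.1-Hz frequency modulation, 5.075-Hz amplitude modulation) are taken from the abstract; the carrier frequency, modulation depths, and duration in the sketch below are illustrative assumptions, not the published stimulus parameters. A minimal NumPy sketch of such a simultaneously AM- and FM-modulated tone:

```python
import numpy as np

def am_fm_stimulus(dur=2.0, fs=44100, carrier=800.0,
                   fm_rate=3.1, fm_depth=100.0,
                   am_rate=5.075, am_depth=0.5):
    """Sine carrier whose frequency is modulated at fm_rate (Hz, +/- fm_depth Hz)
    and whose amplitude is modulated at am_rate (Hz)."""
    t = np.arange(int(dur * fs)) / fs
    # FM phase = 2*pi * integral of the instantaneous frequency
    # f(t) = carrier + fm_depth * sin(2*pi*fm_rate*t)
    phase = (2 * np.pi * carrier * t
             - (fm_depth / fm_rate) * np.cos(2 * np.pi * fm_rate * t))
    envelope = 1.0 + am_depth * np.sin(2 * np.pi * am_rate * t)
    # Normalize so the waveform stays within [-1, 1]
    return envelope * np.sin(phase) / (1.0 + am_depth)

x = am_fm_stimulus()
```

Because 3.1 and 5.075 are not integer multiples of each other, the combined envelope-and-frequency pattern repeats only slowly, giving the stimulus its speech-like rhythmic complexity.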


The Journal of Neuroscience | 2013

The Brain Dynamics of Rapid Perceptual Adaptation to Adverse Listening Conditions

Julia Erb; Molly J. Henry; Frank Eisner; Jonas Obleser

Listeners show a remarkable ability to quickly adjust to degraded speech input. Here, we aimed to identify the neural mechanisms of such short-term perceptual adaptation. In a sparse-sampling, cardiac-gated functional magnetic resonance imaging (fMRI) acquisition, human listeners heard and repeated back 4-band-vocoded sentences (in which the temporal envelope of the acoustic signal is preserved, while spectral information is highly degraded). Clear-speech trials were included as baseline. An additional fMRI experiment on amplitude modulation rate discrimination quantified the convergence of neural mechanisms that subserve coping with challenging listening conditions for speech and non-speech. First, the degraded speech task revealed an “executive” network (comprising the anterior insula and anterior cingulate cortex), parts of which were also activated in the non-speech discrimination task. Second, trial-by-trial fluctuations in successful comprehension of degraded speech drove hemodynamic signal change in classic “language” areas (bilateral temporal cortices). Third, as listeners perceptually adapted to degraded speech, downregulation in a cortico-striato-thalamo-cortical circuit was observable. The present data highlight differential upregulation and downregulation in auditory–language and executive networks, respectively, with important subcortical contributions when successfully adapting to a challenging listening situation.


Timing & Time Perception | 2014

Low-frequency neural oscillations support dynamic attending in temporal context

Molly J. Henry; Björn Herrmann

Behaviorally relevant environmental stimuli are often characterized by some degree of temporal regularity. Dynamic attending theory provides a framework for explaining how perception of stimulus events is affected by the temporal context within which they occur. However, the precise neural implementation of dynamic attending remains unclear. Here, we provide a suggestion for a potential neural implementation of dynamic attending by appealing to low-frequency neural oscillations. The current review will familiarize the reader with the basic theoretical tenets of dynamic attending theory, and review empirical work supporting predictions derived from the theory. The potential neural implementation of dynamic attending theory with respect to low-frequency neural oscillations will be outlined, covering stimulus processing in regular and irregular contexts. Finally, we will provide some more speculative connections between dynamic attending and neural oscillations, and suggest further avenues for future research.


Frontiers in Human Neuroscience | 2012

Neural Oscillations in Speech: Don't be Enslaved by the Envelope

Jonas Obleser; Björn Herrmann; Molly J. Henry

In a recent “Perspective” article (Giraud and Poeppel, 2012), Giraud and Poeppel lay out in admirable clarity how neural oscillations and, in particular, nested oscillations at different time scales, might enable the human brain to understand speech. They provide compelling evidence for “enslaving” of ongoing neural oscillations by slow fluctuations in the amplitude envelope of the speech signal, and propose potential mechanisms for how slow theta and faster gamma oscillatory networks might work together to enable a concerted neural coding of speech. This model is unparalleled in its fruitful incorporation of state-of-the-art computational models and neurophysiology (e.g., the intriguing pyramidal–interneuron gamma loops, PING – which will unfortunately not be observable in healthy, speech-processing humans within the near future). The authors propose a scenario focused on theta and gamma, where problems in speech comprehension are sorted out if (and only if) the brain syncs well enough to the amplitude fluctuations of the incoming signal.


Journal of Experimental Psychology: Human Perception and Performance | 2009

Evaluation of an Imputed Pitch Velocity Model of the Auditory Kappa Effect.

Molly J. Henry; J. Devin McAuley

Three experiments evaluated an imputed pitch velocity model of the auditory kappa effect. Listeners heard 3-tone sequences and judged the timing of the middle (target) tone relative to the timing of the 1st and 3rd (bounding) tones. Experiment 1 held pitch constant but varied the time (T) interval between bounding tones (T = 728, 1,000, or 1,600 ms) in order to establish baseline performance levels for the 3 values of T. Experiments 2 and 3 combined the values of T tested in Experiment 1 with a pitch manipulation in order to create fast (8 semitones/728 ms), medium (8 semitones/1,000 ms), and slow (8 semitones/1,600 ms) velocity conditions. Consistent with an auditory motion hypothesis, distortions in perceived timing were larger for fast than for slow velocity conditions for both ascending sequences (Experiment 2) and descending sequences (Experiment 3). Overall, results supported the proposed imputed pitch velocity model of the auditory kappa effect.


Attention Perception & Psychophysics | 2010

Modality effects in rhythm processing: Auditory encoding of visual rhythms is neither obligatory nor automatic

J. Devin McAuley; Molly J. Henry

Modality effects in rhythm processing were examined using a tempo judgment paradigm, in which participants made speeding-up or slowing-down judgments for auditory and visual sequences. A key element of stimulus construction was that the expected pattern of tempo judgments for critical test stimuli depended on a beat-based encoding of the sequence. A model-based measure of degree of beat-based encoding computed from the pattern of tempo judgments revealed greater beat sensitivity for auditory rhythms than for visual rhythms. Visual rhythms with prior auditory exposure were more likely to show a pattern of tempo judgments similar to that for auditory rhythms than were visual rhythms without prior auditory exposure, but only for a beat period of 600 msec. Slowing down the rhythms eliminated the effect of prior auditory exposure on visual rhythm processing. Taken together, the findings in this study support the view that auditory rhythms demonstrate an advantage over visual rhythms in beat-based encoding and that the auditory encoding of visual rhythms can be facilitated with prior auditory exposure, but only within a limited temporal range. The broad conclusion from this research is that “hearing visual rhythms” is neither obligatory nor automatic, as was previously claimed by Guttman, Gilroy, and Blake (2005).


Neuropsychologia | 2012

Auditory skills and brain morphology predict individual differences in adaptation to degraded speech

Julia Erb; Molly J. Henry; Frank Eisner; Jonas Obleser

Noise-vocoded speech is a spectrally highly degraded signal, but it preserves the temporal envelope of speech. Listeners vary considerably in their ability to adapt to this degraded speech signal. Here, we hypothesised that individual differences in adaptation to vocoded speech should be predictable by non-speech auditory, cognitive, and neuroanatomical factors. We tested 18 normal-hearing participants in a short-term vocoded speech-learning paradigm (listening to 100 4-band-vocoded sentences). Non-speech auditory skills were assessed using amplitude modulation (AM) rate discrimination, where modulation rates were centred on the speech-relevant rate of 4 Hz. Working memory capacities were evaluated (digit span and nonword repetition), and structural MRI scans were examined for anatomical predictors of vocoded speech learning using voxel-based morphometry. Listeners who learned faster to understand degraded speech also showed smaller thresholds in the AM discrimination task. This ability to adjust to degraded speech is furthermore reflected anatomically in increased grey matter volume in an area of the left thalamus (pulvinar) that is strongly connected to the auditory and prefrontal cortices. Thus, individual non-speech auditory skills and left thalamus grey matter volume can predict how quickly a listener adapts to degraded speech.


Journal of Neurophysiology | 2013

Frequency-specific adaptation in human auditory cortex depends on the spectral variance in the acoustic stimulation

Björn Herrmann; Molly J. Henry; Jonas Obleser

In auditory cortex, activation and subsequent adaptation are strongest for regions responding best to a stimulated tone frequency and less for regions responding best to other frequencies. Previous attempts to characterize the spread of neural adaptation in humans investigated the auditory cortex N1 component of the event-related potentials. Importantly, however, more recent studies in animals show that neural response properties are not independent of the stimulation context. To link these findings in animals to human scalp potentials, we investigated whether contextual factors of the acoustic stimulation, namely, spectral variance, affect the spread of neural adaptation. Electroencephalograms were recorded while human participants listened to random tone sequences varying in spectral variance (narrow vs. wide). Spread of adaptation was investigated by modeling single-trial neural adaptation and subsequent recovery based on the spectro-temporal stimulation history. Frequency-specific neural responses were largest on the N1 component, and the modeled neural adaptation indices were strongly predictive of trial-by-trial amplitude variations. Yet the spread of adaptation varied depending on the spectral variance in the stimulation, such that adaptation spread was broadened for tone sequences with wide spectral variance. Thus the present findings reveal context-dependent auditory cortex adaptation and point toward a flexibly adjusting auditory system that changes its response properties with the spectral requirements of the acoustic environment.

Collaboration


Dive into Molly J. Henry's collaborations.

Top Co-Authors

Björn Herrmann

University of Western Ontario