Publication


Featured research published by Reyna L. Gordon.


Annals of the New York Academy of Sciences | 2005

Musical and Linguistic Processing in Song Perception

Daniele Schön; Reyna L. Gordon; Mireille Besson

One approach to comparing the neural bases of language and music is through the use of song, which is a unique and ecological combination of these two cognitive domains. In song, language and music are merged into one acoustic signal with two salient dimensions. By manipulating either the linguistic or musical dimensions (or both) of song and studying their relationships, it is possible to gain important information about the neural networks underlying language and music cognition. We present a brief review followed by recent behavioral, electrophysiological, and neuroimaging studies concerned with the functional and structural relationships of music and language. These results, together with previous studies in the field, help clarify whether the different levels of music and language processing are independent or interactive.


Developmental Science | 2015

Musical rhythm discrimination explains individual differences in grammar skills in children

Reyna L. Gordon; Carolyn M. Shivers; Elizabeth A. Wieland; Sonja A. Kotz; Paul J. Yoder; J. Devin McAuley

This study considered a relation between rhythm perception skills and individual differences in phonological awareness and grammar abilities, which are two language skills crucial for academic achievement. Twenty-five typically developing 6-year-old children were given standardized assessments of rhythm perception, phonological awareness, morpho-syntactic competence, and non-verbal cognitive ability. Rhythm perception accounted for 48% of the variance in morpho-syntactic competence after controlling for non-verbal IQ, socioeconomic status, and prior musical activities. Children with higher phonological awareness scores were better able to discriminate complex rhythms than children with lower scores, but not after controlling for IQ. This study is the first to show a relation between rhythm perception skills and morpho-syntactic production in children with typical language development. These findings extend the literature showing substantial overlap of neurocognitive resources for processing music and language. A video abstract of this article can be viewed at: http://youtu.be/_lO692qHDNg.
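The core analysis here is a hierarchical regression: covariates are entered first, and the increase in explained variance when rhythm perception is added indexes its unique contribution (the 48% figure above). A minimal sketch of that logic in Python with statsmodels follows; the column names and simulated data are hypothetical illustrations, not the study's materials.

```python
# Hierarchical regression sketch: how much variance in a grammar score does
# rhythm perception explain beyond non-verbal IQ, SES, and prior musical
# activities? Column names and the simulated data are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 25  # the study tested 25 children
df = pd.DataFrame({
    "nonverbal_iq": rng.normal(100, 15, n),
    "ses": rng.normal(0, 1, n),
    "music_activities": rng.normal(0, 1, n),
    "rhythm": rng.normal(0, 1, n),
})
# Simulated grammar score that partly depends on rhythm perception.
df["grammar"] = 0.6 * df["rhythm"] + 0.01 * df["nonverbal_iq"] + rng.normal(0, 1, n)

# Step 1: covariates only.
base = smf.ols("grammar ~ nonverbal_iq + ses + music_activities", data=df).fit()
# Step 2: covariates plus rhythm perception.
full = smf.ols("grammar ~ nonverbal_iq + ses + music_activities + rhythm",
               data=df).fit()

# Unique variance attributable to rhythm perception (Delta R^2).
print(f"Delta R^2 for rhythm: {full.rsquared - base.rsquared:.2f}")
```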


Frontiers in Psychology | 2011

EEG Correlates of Song Prosody: A New Look at the Relationship between Linguistic and Musical Rhythm.

Reyna L. Gordon; Cyrille Magne; Edward W. Large

Song composers incorporate linguistic prosody into their music when setting words to melody, a process called “textsetting.” Composers tend to align the expected stress of the lyrics with strong metrical positions in the music. The present study was designed to explore the idea that temporal alignment helps listeners to better understand song lyrics by directing listeners’ attention to instances where strong syllables occur on strong beats. Three types of textsettings were created by aligning metronome clicks with all, some, or none of the strong syllables in sung sentences. Electroencephalographic recordings were taken while participants listened to the sung sentences (primes) and performed a lexical decision task on subsequent words and pseudowords (targets, presented visually). Comparison of misaligned and well-aligned sentences showed that temporal alignment between strong/weak syllables and strong/weak musical beats was associated with modulations of induced beta and evoked gamma power, which have been shown to fluctuate with rhythmic expectancies. Furthermore, targets that followed well-aligned primes elicited greater induced alpha and beta activity, and better lexical decision task performance, compared with targets that followed misaligned and varied sentences. Overall, these findings suggest that alignment of linguistic stress and musical meter in song enhances musical beat tracking and comprehension of lyrics by synchronizing neural activity with strong syllables. This approach may begin to explain the mechanisms underlying the relationship between linguistic and musical rhythm in songs, and how rhythmic attending facilitates learning and recall of song lyrics. Moreover, the observations reported here coincide with a growing number of studies reporting interactions between the linguistic and musical dimensions of song, which likely stem from shared neural resources for processing music and speech.
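The evoked/induced distinction matters here: evoked power reflects activity phase-locked to the stimulus, whereas induced power reflects trial-by-trial oscillations that are not phase-locked. A minimal numpy sketch of one common way to separate the two, assuming band-pass-filtered single-trial epochs; this is an illustration, not the authors' analysis pipeline.

```python
# Separating evoked vs. induced oscillatory power: compute the power of the
# trial average (evoked) and the power of each trial after the average
# waveform (ERP) has been subtracted (induced). Simplified, single-channel sketch.
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical band-pass-filtered epochs: (n_trials, n_times), one channel.
epochs = rng.standard_normal((120, 500))

erp = epochs.mean(axis=0)                      # phase-locked (evoked) waveform

evoked_power = np.mean(erp ** 2)               # power of the average
induced_power = np.mean((epochs - erp) ** 2)   # power left after removing the ERP
total_power = np.mean(epochs ** 2)             # average single-trial power

print(f"evoked:  {evoked_power:.3f}")
print(f"induced: {induced_power:.3f}")
print(f"total:   {total_power:.3f}")
```

In practice this separation is usually applied per frequency band after a time-frequency decomposition (e.g., wavelets), but the subtraction logic is the same.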


PLOS ONE | 2010

Words and Melody Are Intertwined in Perception of Sung Words: EEG and Behavioral Evidence

Reyna L. Gordon; Daniele Schön; Cyrille Magne; Corine Astésano; Mireille Besson

Language and music, two uniquely human cognitive abilities, are combined in song, rendering it an ecological model for comparing speech and music cognition. The present study was designed to determine whether words and melodies in song are processed interactively or independently, and to examine the influence of attention on the processing of words and melodies in song. Event-Related brain Potentials (ERPs) and behavioral data were recorded while non-musicians listened to pairs of sung words (prime and target) presented in four experimental conditions: same word, same melody; same word, different melody; different word, same melody; different word, different melody. Participants were asked to attend to either the words or the melody, and to perform a same/different task. In both attentional tasks, different word targets elicited an N400 component, as predicted based on previous results. Most interestingly, different melodies (sung with the same word) elicited an N400 component followed by a late positive component. Finally, ERP and behavioral data converged in showing interactions between the linguistic and melodic dimensions of sung words. The finding that the N400 effect, a well-established marker of semantic processing, was modulated by musical melody in song suggests that variations in musical features affect word processing in sung language. Implications of the interactions between words and melody are discussed in light of evidence for shared neural processing resources between the phonological/semantic aspects of language and the melodic/harmonic aspects of music.


Frontiers in Psychology | 2015

Does Music Training Enhance Literacy Skills? A Meta-Analysis.

Reyna L. Gordon; Hilda M. Fehd; Bruce D. McCandliss

Children's engagement in music practice is associated with enhancements in literacy-related language skills, as demonstrated by multiple reports of correlation across these two domains. Training studies have tested whether engaging in music training directly transfers benefit to children's literacy skill development. Results of such studies, however, are mixed. Interpretation of these mixed results is made more complex by the fact that a wide range of literacy-related outcome measures are used across these studies. Here, we address these challenges via a meta-analytic approach. A comprehensive literature review of peer-reviewed music training studies was built around key criteria needed to test the direct transfer hypothesis, including: (a) inclusion of music training vs. control groups; (b) inclusion of pre- vs. post-comparison measures; and (c) indication that reading instruction was held constant across groups. Thirteen studies were identified (n = 901). Two classes of outcome measures emerged with sufficient overlap to support meta-analysis: phonological awareness and reading fluency. Hours of training, age, and type of control intervention were examined as potential moderators. Results supported the hypothesis that music training leads to gains in phonological awareness skills. The effect isolated by contrasting gains in music training vs. gains in control was small relative to the large variance in these skills (d = 0.2). Interestingly, analyses revealed that transfer effects for rhyming skills tended to grow stronger with increased hours of training. In contrast, no significant aggregate transfer effect emerged for reading fluency measures, despite some studies reporting large training effects. The potential influence of other study design factors was considered, including intervention design, IQ, and SES. Results are discussed in the context of emerging findings that music training may enhance literacy development via changes in brain mechanisms that support both music and language cognition.
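The pooled effect (d = 0.2) comes from aggregating per-study standardized mean differences, weighting each study by its precision and allowing for between-study heterogeneity. A random-effects sketch in Python using the DerSimonian-Laird estimator is shown below; the effect sizes and variances are placeholders, not the values extracted from the 13 studies.

```python
# Random-effects meta-analysis (DerSimonian-Laird), sketched in numpy.
# The per-study effect sizes and variances below are made-up placeholders.
import numpy as np

d = np.array([0.35, 0.10, 0.25, -0.05, 0.40])   # hypothetical Cohen's d per study
v = np.array([0.04, 0.06, 0.05, 0.08, 0.07])    # hypothetical sampling variances

w = 1.0 / v                                      # fixed-effect weights
d_fixed = np.sum(w * d) / np.sum(w)

# Between-study heterogeneity (tau^2) via DerSimonian-Laird.
Q = np.sum(w * (d - d_fixed) ** 2)
C = np.sum(w) - np.sum(w ** 2) / np.sum(w)
tau2 = max(0.0, (Q - (len(d) - 1)) / C)

w_re = 1.0 / (v + tau2)                          # random-effects weights
d_pooled = np.sum(w_re * d) / np.sum(w_re)
se = np.sqrt(1.0 / np.sum(w_re))

print(f"pooled d = {d_pooled:.2f} "
      f"(95% CI {d_pooled - 1.96*se:.2f} to {d_pooled + 1.96*se:.2f})")
```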


Social Cognitive and Affective Neuroscience | 2014

Neural correlates of cross-modal affective priming by music in Williams syndrome

Miriam D. Lense; Reyna L. Gordon; Alexandra P. F. Key; Elisabeth M. Dykens

Emotional connection is the main reason people engage with music, and the emotional features of music can influence processing in other domains. Williams syndrome (WS) is a neurodevelopmental genetic disorder where musicality and sociability are prominent aspects of the phenotype. This study examined oscillatory brain activity during a musical affective priming paradigm. Participants with WS and age-matched typically developing controls heard brief emotional musical excerpts or emotionally neutral sounds and then reported the emotional valence (happy/sad) of subsequently presented faces. Participants with WS demonstrated greater evoked fronto-central alpha activity to the happy vs sad musical excerpts. The size of these alpha effects correlated with parent-reported emotional reactivity to music. Although participant groups did not differ in accuracy of identifying facial emotions, reaction time data revealed a music priming effect only in persons with WS, who responded faster when the face matched the emotional valence of the preceding musical excerpt vs when the valence differed. Matching emotional valence was also associated with greater evoked gamma activity thought to reflect cross-modal integration. This effect was not present in controls. The results suggest a specific connection between music and socioemotional processing and have implications for clinical and educational approaches for WS.


Brain and Language | 2016

Speech rhythm sensitivity and musical aptitude: ERPs and individual differences

Cyrille Magne; Deanna K. Jordan; Reyna L. Gordon

This study investigated the electrophysiological markers of rhythmic expectancy during speech perception. In addition, given the large literature showing overlaps between cognitive and neural resources recruited for language and music, we considered a relation between musical aptitude and individual differences in speech rhythm sensitivity. Twenty adults were administered a standardized assessment of musical aptitude, and EEG was recorded as participants listened to sequences of four bisyllabic words for which the stress pattern of the final word either matched or mismatched the stress pattern of the preceding words. Words with unexpected stress patterns elicited an increased fronto-central mid-latency negativity. In addition, rhythm aptitude significantly correlated with the size of the negative effect elicited by unexpected iambic words, the least common type of stress pattern in English. The present results suggest shared neurocognitive resources for speech rhythm and musical rhythm.


Annals of the New York Academy of Sciences | 2015

Perspectives on the rhythm–grammar link and its implications for typical and atypical language development

Reyna L. Gordon; Magdalene S. Jacobs; C. Melanie Schuele; J. Devin McAuley

This paper reviews the mounting evidence for shared cognitive mechanisms and neural resources for rhythm and grammar. Evidence for a role of rhythm skills in language development and language comprehension is reviewed here in three lines of research: (1) behavioral and brain data from adults and children, showing that prosody and other aspects of timing of sentences influence online morpho-syntactic processing; (2) comorbidity of impaired rhythm with grammatical deficits in children with language impairment; and (3) our recent work showing a strong positive association between rhythm perception skills and expressive grammatical skills in young school-age children with typical development. Our preliminary follow-up study presented here revealed that musical rhythm perception predicted variance in 6-year-old children's production of complex syntax, as well as online reorganization of grammatical information (transformation); these data provide an additional perspective on the hierarchical relations potentially shared by rhythm and grammar. A theoretical framework for the role of rhythm in perceiving and learning grammatical structure, grounded in shared cognitive resources, is elaborated in light of potential implications for using rhythm-emphasized musical training to improve language skills in children.


Journal of Child Neurology | 2015

Induced Gamma Oscillations Differentiate Familiar and Novel Voices in Children With MECP2 Duplication and Rett Syndromes

Sarika U. Peters; Reyna L. Gordon; Alexandra P. Key

Normal levels of the methyl CpG–binding protein 2 (MeCP2) are critical to neurologic functioning, and slight alterations result in intellectual disability and autistic features. It was hypothesized that children with MECP2 duplication (overexpression of MeCP2) and Rett syndrome (underexpression of MeCP2) would exhibit distinct electroencephalographic (EEG) indices of auditory stimulus discrimination. In this study, gamma-band oscillatory responses to familiar and novel voices were examined and related to social functioning in 17 children (3-11 years old) with MECP2 duplication (n = 12) and Rett syndrome (n = 5). Relative to the stranger’s voice, gamma activity in response to the mother’s voice was increased in MECP2 duplication but decreased in Rett syndrome. In MECP2 duplication, greater mother versus stranger differences in gamma activity were associated with higher social functioning. For the first time, brain responses in a passive voice discrimination paradigm show that overexpression and underexpression of MeCP2 have differential effects on cortical information processing.


Annals of the New York Academy of Sciences | 2018

The case for treatment fidelity in active music interventions: why and how

Natalie Wiens; Reyna L. Gordon

As the volume of studies testing the benefits of active music‐making interventions increases exponentially, it is important to document what exactly is happening during music treatment sessions in order to provide evidence for the mechanisms through which music training affects other domains. Thus, to complement systematic and rigorous attention to outcomes of the treatment, we outline four vital components of treatment fidelity and discuss their implementation in nonmusic‐ and music‐based interventions. We then describe the design of Music Impacting Language Expertise (MILEStone), a new intervention that aims to improve grammar skills in children with specific language impairment by increasing sensitivity to rhythmic structure, which may enhance general temporal processing and sensitivity to syntactic structure. We describe the approach to addressing treatment fidelity in MILEStone adapted from intervention research from other fields, including a behavioral coding system to track instructional episodes and child participation, a treatment manual, activity checklists, provider training and monitoring, a home practice log, and teacher ratings of participant engagement. This approach takes an important first step in modeling a formalized procedure for assessing treatment fidelity in active music‐making intervention research, as a means of increasing methodological rigor in support of evidence‐based practice in clinical and educational settings.

Collaboration


Dive into Reyna L. Gordon's collaborations.

Top Co-Authors

Cyrille Magne (Middle Tennessee State University)

Daniele Schön (Aix-Marseille University)

Elisabeth M. Dykens (Vanderbilt University Medical Center)

Alexandra P. Key (Vanderbilt University Medical Center)