Publications


Featured research published by Bruno Gingras.


PLOS ONE | 2014

The musicality of non-musicians: an index for assessing musical sophistication in the general population.

Daniel Müllensiefen; Bruno Gingras; Jason Musil; Lauren Stewart

Musical skills and expertise vary greatly in Western societies. Individuals can differ in their repertoire of musical behaviours as well as in the level of skill they display for any single musical behaviour. The types of musical behaviours we refer to here are broad, ranging from performance on an instrument and listening expertise, to the ability to employ music in functional settings or to communicate about music. In this paper, we first describe the concept of ‘musical sophistication’ which can be used to describe the multi-faceted nature of musical expertise. Next, we develop a novel measurement instrument, the Goldsmiths Musical Sophistication Index (Gold-MSI) to assess self-reported musical skills and behaviours on multiple dimensions in the general population using a large Internet sample (n = 147,636). Thirdly, we report results from several lab studies, demonstrating that the Gold-MSI possesses good psychometric properties, and that self-reported musical sophistication is associated with performance on two listening tasks. Finally, we identify occupation, occupational status, age, gender, and wealth as the main socio-demographic factors associated with musical sophistication. Results are discussed in terms of theoretical accounts of implicit and statistical music learning and with regard to social conditions of sophisticated musical engagement.


Emotion | 2012

Crossmodal transfer of arousal, but not pleasantness, from the musical to the visual domain.

Manuela M. Marin; Bruno Gingras; Joydeep Bhattacharya

Arousal and valence (pleasantness) are considered primary dimensions of emotion. However, the degree to which these dimensions interact in emotional processing across sensory modalities is poorly understood. We addressed this issue by applying a crossmodal priming paradigm in which auditory primes (Romantic piano solo music) varying in arousal and/or pleasantness were sequentially paired with visual targets (IAPS pictures). In Experiment 1, the emotion spaces of 120 primes and 120 targets were explored separately in addition to the effects of musical training and gender. Thirty-two participants rated their felt pleasantness and arousal in response to primes and targets on equivalent rating scales as well as their familiarity with the stimuli. Musical training was associated with elevated familiarity ratings for high-arousing music and a trend for elevated arousal ratings, especially in response to unpleasant musical stimuli. Males reported higher arousal than females for pleasant visual stimuli. In Experiment 2, 40 nonmusicians rated their felt arousal and pleasantness in response to 20 visual targets after listening to 80 musical primes. Arousal associated with the musical primes modulated felt arousal in response to visual targets, yet no such transfer of pleasantness was observed between the two modalities. Experiment 3 sought to rule out the possibility of any order effect of the subjective ratings, and responses of 14 nonmusicians replicated results of Experiment 2. This study demonstrates the effectiveness of the crossmodal priming paradigm in basic research on musical emotions.


Neuropsychologia | 2012

Perception of musical timbre in congenital amusia: Categorization, discrimination and short-term memory

Manuela M. Marin; Bruno Gingras; Lauren Stewart

Congenital amusia is a neurodevelopmental disorder that is characterized primarily by difficulties in the pitch domain. The aim of the present study was to investigate the perception of musical timbre in a group of individuals with congenital amusia by probing discrimination and short-term memory for real-world timbral stimuli as well as examining the ability of these individuals to sort instrumental tones according to their timbral similarity. Thirteen amusic individuals were matched with thirteen non-amusic controls on a range of background variables. The discrimination task included stimuli of two different durations and pairings of instrumental tones that reflected varying distances in a perceptual timbre space. Performance in the discrimination task was at ceiling for both groups. In contrast, amusic individuals scored lower than controls on the short-term timbral memory task. Amusic individuals also performed worse than controls on the sorting task, suggesting differences in the higher-order representation of musical timbre. These findings add to the emerging picture of amusia as a disorder that has consequences for the perception and memory of musical timbre, as well as pitch.


Journal of the Acoustical Society of America | 2013

A three-parameter model for classifying anurans into four genera based on advertisement calls

Bruno Gingras; W. T. Fitch

The vocalizations of anurans are innate in structure and may therefore contain indicators of phylogenetic history. Thus, advertisement calls of species which are more closely related phylogenetically are predicted to be more similar than those of distant species. This hypothesis was evaluated by comparing several widely used machine-learning algorithms. Recordings of advertisement calls from 142 species belonging to four genera were analyzed. A logistic regression model, using mean values for dominant frequency, coefficient of variation of root-mean-square energy, and spectral flux, correctly classified advertisement calls with regard to genus with an accuracy above 70%. Similar accuracy rates were obtained using these parameters with a support vector machine model, a K-nearest neighbor algorithm, and a multivariate Gaussian distribution classifier, whereas a Gaussian mixture model performed slightly worse. In contrast, models based on mel-frequency cepstral coefficients did not fare as well. Comparable accuracy levels were obtained on out-of-sample recordings from 52 of the 142 original species. The results suggest that a combination of low-level acoustic attributes is sufficient to discriminate efficiently between the vocalizations of these four genera, thus supporting the initial premise and validating the use of high-throughput algorithms on animal vocalizations to evaluate phylogenetic hypotheses.
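
To make the classification approach concrete, here is a minimal sketch of a multinomial logistic regression on the three acoustic parameters named above (dominant frequency, coefficient of variation of RMS energy, and spectral flux). The feature values are random placeholders and the scikit-learn pipeline is an assumption for illustration, not the authors' original code.

```python
# Hypothetical sketch: classifying anuran calls into four genera from three
# per-species acoustic features, assuming the features have already been
# extracted (e.g., with an audio-analysis toolbox) into a NumPy array.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Placeholder data standing in for the real measurements:
# columns = [dominant frequency, CV of RMS energy, spectral flux]
X = rng.normal(size=(142, 3))        # one row per species
y = rng.integers(0, 4, size=142)     # genus label (0..3)

# Multinomial logistic regression with feature standardization.
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))

# Cross-validated genus classification accuracy.
scores = cross_val_score(model, X, y, cv=5)
print(f"Mean accuracy: {scores.mean():.2f}")
```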


Philosophical Transactions of the Royal Society B | 2015

Defining the biological bases of individual differences in musicality

Bruno Gingras; Henkjan Honing; Isabelle Peretz; Laurel J. Trainor; Simon E. Fisher

Advances in molecular technologies make it possible to pinpoint genomic factors associated with complex human traits. For cognition and behaviour, identification of underlying genes provides new entry points for deciphering the key neurobiological pathways. In the past decade, the search for genetic correlates of musicality has gained traction. Reports have documented familial clustering for different extremes of ability, including amusia and absolute pitch (AP), with twin studies demonstrating high heritability for some music-related skills, such as pitch perception. Certain chromosomal regions have been linked to AP and musical aptitude, while individual candidate genes have been investigated in relation to aptitude and creativity. Most recently, researchers in this field started performing genome-wide association scans. Thus far, studies have been hampered by relatively small sample sizes and limitations in defining components of musicality, including an emphasis on skills that can only be assessed in trained musicians. With opportunities to administer standardized aptitude tests online, systematic large-scale assessment of musical abilities is now feasible, an important step towards high-powered genome-wide screens. Here, we offer a synthesis of existing literatures and outline concrete suggestions for the development of comprehensive operational tools for the analysis of musical phenotypes.


Sensors | 2013

Primate Drum Kit: A System for Studying Acoustic Pattern Production by Non-Human Primates Using Acceleration and Strain Sensors

Andrea Ravignani; Vicente Matellán Olivera; Bruno Gingras; Riccardo Hofer; Carlos Rodríguez Hernández; Ruth-Sophie Sonnweber; W. Tecumseh Fitch

Achieving experimentally controlled, non-vocal acoustic production in non-human primates is a key step towards testing a number of hypotheses on primate behavior and cognition. However, no suitable device is currently available: the use of sensors with non-human animals has been almost exclusively devoted to applications in the food industry and animal surveillance. Specifically, no device exists which simultaneously allows: (i) spontaneous production of sound or music by non-human animals via object manipulation, (ii) systematic recording of the data sensed from these movements, and (iii) the possibility of altering the acoustic feedback properties of the object by remote control. We present two prototypes developed for use with chimpanzees (Pan troglodytes) which, while fulfilling these requirements, allow sounds to be arbitrarily associated with physical object movements. The prototypes differ in sensing technology, cost, intended use, and construction requirements. One prototype uses four piezoelectric elements embedded between layers of Plexiglas and foam; strain data are sent to a computer running Python through an Arduino board. The second prototype consists of a modified Wii Remote contained in a gum toy; acceleration data are sent via Bluetooth to a computer running Max/MSP. We successfully pilot-tested the first device with a group of chimpanzees and foresee using these devices for a range of cognitive experiments.
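
As an illustration of the data path described for the first prototype (piezo elements to Arduino to a Python host), below is a minimal sketch of the host-side reading loop. The serial port name, baud rate, message format, and threshold-based trigger are all assumptions; the abstract does not specify the firmware protocol or the sound-mapping logic.

```python
# Minimal sketch of the Python side of the piezo/Arduino prototype: read
# strain readings streamed over a serial connection and trigger a sound when
# a reading crosses a threshold. Port, baud rate, frame format, and threshold
# are hypothetical.
import serial  # pyserial

PORT = "/dev/ttyACM0"   # hypothetical device path
BAUD = 115200
THRESHOLD = 512         # arbitrary strain threshold

def play_sound():
    # Placeholder: in the real system a sound would be synthesized or played
    # back, possibly with remotely controllable feedback properties.
    print("strike detected -> play sound")

with serial.Serial(PORT, BAUD, timeout=1) as ser:
    while True:
        raw = ser.readline().strip()   # e.g. b"123,456,78,901" per frame
        if not raw:
            continue
        try:
            values = [int(v) for v in raw.split(b",")]  # four piezo channels
        except ValueError:
            continue                   # skip malformed frames
        if max(values) > THRESHOLD:
            play_sound()
```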


BMC Evolutionary Biology | 2013

Phylogenetic signal in the acoustic parameters of the advertisement calls of four clades of anurans

Bruno Gingras; Elmira Mohandesan; Drasko Boko; W. Tecumseh Fitch

Background: Anuran vocalizations, especially their advertisement calls, are largely species-specific and can be used to identify taxonomic affiliations. Because anurans are not vocal learners, their vocalizations are generally assumed to have a strong genetic component. This suggests that the degree of similarity between advertisement calls may be related to large-scale phylogenetic relationships. To test this hypothesis, advertisement calls from 90 species belonging to four large clades (Bufo, Hylinae, Leptodactylus, and Rana) were analyzed. Phylogenetic distances were estimated based on the DNA sequences of the 12S mitochondrial ribosomal RNA gene, and, for a subset of 49 species, on the rhodopsin gene. Mean values for five acoustic parameters (coefficient of variation of root-mean-square amplitude, dominant frequency, spectral flux, spectral irregularity, and spectral flatness) were computed for each species. We then tested for phylogenetic signal on the body-size-corrected residuals of these five parameters, using three statistical tests (Moran’s I, Mantel, and Blomberg’s K) and three models of genetic distance (pairwise distances, Abouheif’s proximities, and the variance-covariance matrix derived from the phylogenetic tree).

Results: A significant phylogenetic signal was detected for most acoustic parameters on the 12S dataset, across statistical tests and genetic distance models, both for the entire sample of 90 species and within clades in several cases. A further analysis on a subset of 49 species using genetic distances derived from rhodopsin and from 12S broadly confirmed the results obtained on the larger sample, indicating that the phylogenetic signals observed in these acoustic parameters can be detected using a variety of genetic distance models derived either from a variable mitochondrial sequence or from a conserved nuclear gene.

Conclusions: We found a robust relationship, in a large number of species, between anuran phylogenetic relatedness and acoustic similarity in the advertisement calls in a taxon with no evidence for vocal learning, even after correcting for the effect of body size. This finding, covering a broad sample of species whose vocalizations are fairly diverse, indicates that the intense selection on certain call characteristics observed in many anurans does not eliminate all acoustic indicators of relatedness. Our approach could potentially be applied to other vocal taxa.
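
One of the three statistics used to assess phylogenetic signal, the Mantel test, correlates two pairwise distance matrices. The sketch below shows a simple permutation-based Mantel test on placeholder matrices; the actual genetic and acoustic distances, and the dedicated comparative-methods packages typically used for such analyses, are not reproduced here.

```python
# Illustrative permutation-based Mantel test. The distance matrices are random
# placeholders; in the study they would be pairwise genetic distances and
# pairwise distances between body-size-corrected acoustic parameters.
import numpy as np

def mantel(d1, d2, n_perm=999, seed=0):
    """Correlate the upper triangles of two distance matrices and estimate a
    p-value by permuting the rows/columns of the second matrix."""
    rng = np.random.default_rng(seed)
    iu = np.triu_indices_from(d1, k=1)
    r_obs = np.corrcoef(d1[iu], d2[iu])[0, 1]
    count = 0
    n = d1.shape[0]
    for _ in range(n_perm):
        p = rng.permutation(n)
        r_perm = np.corrcoef(d1[iu], d2[p][:, p][iu])[0, 1]
        if abs(r_perm) >= abs(r_obs):
            count += 1
    return r_obs, (count + 1) / (n_perm + 1)

# Placeholder symmetric distance matrices for 90 species.
rng = np.random.default_rng(1)
a = rng.random((90, 90))
genetic_dist = (a + a.T) / 2
np.fill_diagonal(genetic_dist, 0)
b = rng.random((90, 90))
acoustic_dist = (b + b.T) / 2
np.fill_diagonal(acoustic_dist, 0)

r, p = mantel(genetic_dist, acoustic_dist)
print(f"Mantel r = {r:.3f}, p = {p:.3f}")
```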


Quarterly Journal of Experimental Psychology | 2014

Beyond intensity: Spectral features effectively predict music-induced subjective arousal

Bruno Gingras; Manuela M. Marin; W. Tecumseh Fitch

Emotions in music are conveyed by a variety of acoustic cues. Notably, the positive association between sound intensity and arousal has particular biological relevance. However, although amplitude normalization is a common procedure used to control for intensity in music psychology research, direct comparisons between emotional ratings of original and amplitude-normalized musical excerpts are lacking. In this study, 30 nonmusicians retrospectively rated the subjective arousal and pleasantness induced by 84 six-second classical music excerpts, and an additional 30 nonmusicians rated the same excerpts normalized for amplitude. Following the cue-redundancy and Brunswik lens models of acoustic communication, we hypothesized that arousal and pleasantness ratings would be similar for both versions of the excerpts, and that arousal could be predicted effectively by other acoustic cues besides intensity. Although the difference in mean arousal and pleasantness ratings between original and amplitude-normalized excerpts correlated significantly with the amplitude adjustment, ratings for both sets of excerpts were highly correlated and shared a similar range of values, thus validating the use of amplitude normalization in music emotion research. Two acoustic parameters, spectral flux and spectral entropy, accounted for 65% of the variance in arousal ratings for both sets, indicating that spectral features can effectively predict arousal. Additionally, we confirmed that amplitude-normalized excerpts were adequately matched for loudness. Overall, the results corroborate our hypotheses and support the cue-redundancy and Brunswik lens models.
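
For readers unfamiliar with the two predictors, the sketch below computes one common formulation of spectral flux and spectral entropy from a short-time Fourier transform. The exact feature definitions and extraction settings used in the study are not given in the abstract, so treat these as illustrative assumptions.

```python
# Sketch of two spectral features of the kind used to predict arousal ratings,
# computed from short-time Fourier magnitude spectra of a mono signal.
import numpy as np
from scipy.signal import stft

def spectral_features(signal, sr, nperseg=1024):
    _, _, Z = stft(signal, fs=sr, nperseg=nperseg)
    mag = np.abs(Z)                                   # (freq_bins, frames)

    # Spectral flux: mean frame-to-frame change of the magnitude spectrum.
    flux = np.mean(np.sqrt(np.sum(np.diff(mag, axis=1) ** 2, axis=0)))

    # Spectral entropy: Shannon entropy of the normalized average spectrum.
    p = mag.mean(axis=1)
    p = p / (p.sum() + 1e-12)
    entropy = -np.sum(p * np.log2(p + 1e-12))

    return flux, entropy

# Example on a synthetic 6-second excerpt (white-noise placeholder).
sr = 44100
excerpt = np.random.default_rng(0).normal(size=6 * sr)
print(spectral_features(excerpt, sr))
```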


Proceedings of the National Academy of Sciences of the United States of America | 2014

Overtone-based pitch selection in hermit thrush song: Unexpected convergence with scale construction in human music

Emily L. Doolittle; Bruno Gingras; Dominik Endres; W. Tecumseh Fitch

Significance: The song of the hermit thrush, a common North American songbird, is renowned for its apparent musicality and has attracted the attention of musicians and ornithologists for more than a century. Here we show that hermit thrush songs, like much human music, use pitches that are mathematically related by simple integer ratios and follow the harmonic series. Our findings add to a small but growing body of research showing that a preference for small-integer ratio intervals is not unique to humans and are thus particularly relevant to the ongoing nature/nurture debate about whether musical predispositions such as the preference for consonant intervals are biologically or culturally driven.

Many human musical scales, including the diatonic major scale prevalent in Western music, are built partially or entirely from intervals (ratios between adjacent frequencies) corresponding to small-integer proportions drawn from the harmonic series. Scientists have long debated the extent to which principles of scale generation in human music are biologically or culturally determined. Data from animal “song” may provide new insights into this discussion. Here, by examining pitch relationships using both a simple linear regression model and a Bayesian generative model, we show that most songs of the hermit thrush (Catharus guttatus) favor simple frequency ratios derived from the harmonic (or overtone) series. Furthermore, we show that this frequency selection results not from physical constraints governing peripheral production mechanisms but from active selection at a central level. These data provide the most rigorous empirical evidence to date of a bird song that makes use of the same mathematical principles that underlie Western and many non-Western musical scales, demonstrating surprising convergence between human and animal “song cultures.” Although there is no evidence that the songs of most bird species follow the overtone series, our findings add to a small but growing body of research showing that a preference for small-integer frequency ratios is not unique to humans. These findings thus have important implications for current debates about the origins of human musical systems and may call for a reevaluation of existing theories of musical consonance based on specific human vocal characteristics.
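
The toy sketch below illustrates the logic behind the simpler of the two analyses: given the note frequencies of one song, search for a fundamental whose integer multiples (harmonics) best match the notes, scoring the fit in cents. The note values and search range are made up for illustration; the Bayesian generative model used in the paper is not reproduced.

```python
# Toy harmonic-series fit for a set of note frequencies. The frequencies below
# are hypothetical values lying near harmonics 4-7 of ~450 Hz.
import numpy as np

def harmonic_fit(freqs, f0_candidates):
    """For each candidate fundamental, measure the mean deviation (in cents)
    of each note from its nearest harmonic, and return the best candidate."""
    freqs = np.asarray(freqs, dtype=float)
    best = (None, np.inf)
    for f0 in f0_candidates:
        harmonics = np.round(freqs / f0)     # nearest integer multiple
        harmonics[harmonics < 1] = 1
        cents = 1200 * np.abs(np.log2(freqs / (harmonics * f0)))
        score = cents.mean()
        if score < best[1]:
            best = (f0, score)
    return best

# Hypothetical note frequencies (Hz). The candidate range is kept narrow
# because subharmonics of a good fundamental would otherwise fit trivially well.
notes = [1802.0, 2255.0, 2698.0, 3147.0]
f0, deviation = harmonic_fit(notes, np.arange(350.0, 600.0, 0.5))
print(f"Best-fitting fundamental: {f0:.1f} Hz, mean deviation {deviation:.1f} cents")
```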


Frontiers in Psychology | 2014

Pitch enhancement facilitates word learning across visual contexts

Piera Filippi; Bruno Gingras; W. Tecumseh Fitch

This study investigates word-learning using a new experimental paradigm that integrates three processes: (a) extracting a word out of a continuous sound sequence, (b) inferring its referential meanings in context, (c) mapping the segmented word onto its broader intended referent, such as other objects of the same semantic category, and to novel utterances. Previous work has examined the role of statistical learning and/or of prosody in each of these processes separately. Here, we combine these strands of investigation into a single experimental approach, in which participants viewed a photograph belonging to one of three semantic categories while hearing a complex, five-word utterance containing a target word. Six between-subjects conditions were tested with 20 adult participants each. In condition 1, the only cue to word-meaning mapping was the co-occurrence of word and referents. This statistical cue was present in all conditions. In condition 2, the target word was sounded at a higher pitch. In condition 3, random words were sounded at a higher pitch, creating an inconsistent cue. In condition 4, the duration of the target word was lengthened. In conditions 5 and 6, an extraneous acoustic cue and a visual cue were associated with the target word, respectively. Performance in this word-learning task was significantly higher than that observed with simple co-occurrence only when pitch prominence consistently marked the target word. We discuss implications for the pragmatic value of pitch marking as well as the relevance of our findings to language acquisition and language evolution.
