Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Gláucia Laís Salomão is active.

Publication


Featured research published by Gláucia Laís Salomão.


Computer Speech & Language | 2015

Comparing the acoustic expression of emotion in the speaking and the singing voice

Klaus R. Scherer; Johan Sundberg; Lucas Tamarit; Gláucia Laís Salomão

We examine the similarities and differences in the expression of emotion in the singing and the speaking voice. Three internationally renowned opera singers produced “vocalises” (using a schwa vowel) and short nonsense phrases in different interpretations for 10 emotions. Acoustic analyses of emotional expression in the singing samples show significant differences between the emotions. In addition to the obvious effects of loudness and tempo, spectral balance and perturbation make significant contributions (high effect sizes) to this differentiation. A comparison of the emotion-specific patterns produced by the singers in this study with published data for professional actors portraying different emotions in speech generally shows a very high degree of similarity. However, singers tend to rely more than actors on the use of voice perturbation, specifically vibrato, in particular in the case of high-arousal emotions. It is suggested that this may be due to the restrictions and constraints imposed by the musical structure.
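As a rough illustration of the kind of acoustic measures named above, the sketch below computes an overall sound level and a simple spectral-balance measure for one analysis frame with NumPy. The 1 kHz split frequency, the frame length, and the synthetic test signal are assumptions made for the example, not values or code from the study.

```python
import numpy as np

def sound_level_db(frame: np.ndarray) -> float:
    # RMS level of one analysis frame, in dB relative to full scale.
    rms = np.sqrt(np.mean(frame ** 2))
    return 20.0 * np.log10(rms + 1e-12)

def spectral_balance_db(frame: np.ndarray, sr: int, split_hz: float = 1000.0) -> float:
    # Level difference (dB) between spectral energy above and below split_hz.
    # The split frequency is an assumed, illustrative choice.
    spectrum = np.abs(np.fft.rfft(frame * np.hanning(len(frame)))) ** 2
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sr)
    low = spectrum[freqs < split_hz].sum()
    high = spectrum[freqs >= split_hz].sum()
    return 10.0 * np.log10((high + 1e-12) / (low + 1e-12))

# Example: one 50 ms frame of a synthetic two-component signal at 16 kHz.
sr = 16000
t = np.arange(int(0.05 * sr)) / sr
frame = 0.5 * np.sin(2 * np.pi * 220 * t) + 0.1 * np.sin(2 * np.pi * 2200 * t)
print(sound_level_db(frame), spectral_balance_db(frame, sr))
```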


Logopedics Phoniatrics Vocology | 2009

What do male singers mean by modal and falsetto register? An investigation of the glottal voice source

Gláucia Laís Salomão; Johan Sundberg

The voice source differs between modal and falsetto registers, but singers often try to reduce the associated timbral differences, some even doubting that there are any. A total of 54 vowel sounds sung in falsetto and modal register by 13 male choir singers of varying experience were analyzed by inverse filtering and electroglottography. Closed quotient, maximum flow declination rate, peak-to-peak airflow amplitude, normalized amplitude quotient, and level difference between the two lowest source spectrum partials were determined, and systematic differences were found in all singers, regardless of singing experience. The observations seem compatible with previous observations of thicker vocal folds in modal register.
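A minimal sketch of one of the listed source-spectrum measures, assuming an inverse-filtered glottal flow signal and a known fundamental frequency are already available: it estimates H1-H2, the level difference between the two lowest source spectrum partials. The synthetic signal and all numeric choices are illustrative and are not the analysis code used in the paper.

```python
import numpy as np

def h1_h2_db(flow: np.ndarray, sr: int, f0: float) -> float:
    # Level difference (dB) between the partials at f0 and 2*f0
    # in the spectrum of the (assumed) inverse-filtered flow signal.
    window = np.hanning(len(flow))
    spectrum = np.abs(np.fft.rfft(flow * window))
    freqs = np.fft.rfftfreq(len(flow), d=1.0 / sr)
    h1 = spectrum[np.argmin(np.abs(freqs - f0))]
    h2 = spectrum[np.argmin(np.abs(freqs - 2.0 * f0))]
    return 20.0 * np.log10((h1 + 1e-12) / (h2 + 1e-12))

# Example with a synthetic two-partial "flow" signal at f0 = 110 Hz.
sr, f0 = 16000, 110.0
t = np.arange(sr) / sr  # one second of signal, 1 Hz spectral resolution
flow = 1.0 * np.sin(2 * np.pi * f0 * t) + 0.3 * np.sin(2 * np.pi * 2 * f0 * t)
print(h1_h2_db(flow, sr, f0))  # roughly 20*log10(1.0/0.3) ≈ 10.5 dB
```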


Journal of the Acoustical Society of America | 2008

Relation between perceived voice register and flow glottogram parameters in males

Gláucia Laís Salomão; Johan Sundberg

The perception of modal and falsetto registers was analyzed in a material consisting of a total of 104 vowel sounds sung by 13 choir singers, 52 sung in modal register, and 52 in falsetto register. These vowel sounds were classified by 16 expert listeners in a forced-choice test and the number of votes for modal was compared to the voice source parameters: (1) closed quotient (Q(closed)), (2) level difference between the two lowest source spectrum partials (H1-H2), (3) AC amplitude, (4) maximum flow declination rate (MFDR), and (5) normalized amplitude quotient (NAQ = (AC amplitude / MFDR) × fundamental frequency). Tones with a high value of Q(closed) and low values of H1-H2 and of NAQ were typically associated with a high number of votes for modal register, and vice versa, Q(closed) showing the strongest correlation. Some singer subjects produced tones that could not be classified as either falsetto or modal register, suggesting that classification of registers is not always feasible.
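The NAQ definition quoted above can be made concrete with a small worked example. The flow values below are assumed, illustrative numbers, not data from the study.

```python
# Worked example of NAQ = (AC amplitude / MFDR) * fundamental frequency.
ac_amplitude = 0.3   # peak-to-peak flow amplitude, l/s (assumed value)
mfdr = 220.0         # maximum flow declination rate, l/s^2 (assumed value)
f0 = 110.0           # fundamental frequency, Hz (assumed value)

t0 = 1.0 / f0                      # fundamental period, s
naq = ac_amplitude / (mfdr * t0)   # equivalently (ac_amplitude / mfdr) * f0
print(f"NAQ = {naq:.3f}")          # ~0.15 for these illustrative values
```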


Journal of Voice | 2015

Natural Voice Use in Patients With Voice Disorders and Vocally Healthy Speakers Based on 2 Days Voice Accumulator Information From a Database

Maria Södersten; Gláucia Laís Salomão; Anita McAllister; Sten Ternström

OBJECTIVES AND STUDY DESIGN: Information about how patients with voice disorders use their voices in natural communicative situations is scarce. Such long-term data have for the first time been uploaded to a central database from different hospitals in Sweden. The purpose was to investigate the potential use of a large set of long-term data for establishing reference values regarding voice use in natural situations.

METHODS: VoxLog (Sonvox AB, Umeå, Sweden) was tested for deployment in clinical practice by speech-language pathologists working at nine hospitals in Sweden. Files from 20 patients (16 females and 4 males) with functional, organic, or neurological voice disorders and 10 vocally healthy individuals (eight females and two males) were uploaded to a remote central database. All participants had vocally demanding occupations and had been monitored for more than 2 days. The total recording time was 681 hours and 50 minutes. Data on fundamental frequency (F0, Hz), phonation time (seconds and percentage), voice sound pressure level (SPL, dB), and background noise level (dB) were analyzed for each recorded day and compared between the 2 days. Variations across each day were measured using coefficients of variation.

RESULTS: Average F0, voice SPL, and especially the level of background noise varied considerably for all participants across each day. Average F0 and voice SPL were considerably higher than reference values from laboratory recordings.

CONCLUSIONS: The use of a remote central database and strict protocols can accelerate data collection from larger groups of participants and contribute to establishing reference values regarding voice use in natural situations and from patients with voice disorders. Information about activities and voice symptoms would supplement the objective data and is recommended in future studies.
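A minimal sketch of the coefficient-of-variation summary described in the methods, assuming per-sample logs of F0 and voice SPL for one recorded day are available. The array contents and variable names are placeholders for illustration, not VoxLog data.

```python
import numpy as np

def coefficient_of_variation(values) -> float:
    # Standard deviation divided by the mean (dimensionless).
    values = np.asarray(values, dtype=float)
    return values.std() / values.mean()

# Hypothetical per-frame log for one day of voice accumulator data (assumed values).
f0_hz = np.array([190, 210, 205, 230, 180, 220, 215, 200])
voice_spl_db = np.array([68, 72, 75, 71, 66, 74, 73, 70])

print(f"CV of F0:  {coefficient_of_variation(f0_hz):.3f}")
print(f"CV of SPL: {coefficient_of_variation(voice_spl_db):.3f}")
```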


Journal of the Acoustical Society of America | 2018

Prediction of three articulatory categories in vocal sound imitations using models for auditory receptive fields

Anders Friberg; Tony Lindeberg; Martin Hellwagner; Pétur Helgason; Gláucia Laís Salomão; Anders Elowsson; Guillaume Lemaitre; Sten Ternström

Vocal sound imitations provide a new challenge for understanding the coupling between articulatory mechanisms and the resulting audio. In this study, the classification of three articulatory categories (phonation, supraglottal myoelastic vibrations, and turbulence) has been modeled from audio recordings. Two data sets were assembled, consisting of different vocal imitations by four professional imitators and four non-professional speakers in two different experiments. The audio data were manually annotated by two experienced phoneticians using a detailed articulatory description scheme. A separate set of audio features was developed specifically for each category using both time-domain and spectral methods. For all time-frequency transformations, and for some secondary processing, the recently developed Auditory Receptive Fields Toolbox was used. Three different machine learning methods were applied for predicting the final articulatory categories. The result with the best generalization was found using an ensemble of multilayer perceptrons. The cross-validated classification accuracy was 96.8% for phonation, 90.8% for supraglottal myoelastic vibrations, and 89.0% for turbulence using all the 84 developed features. A final feature reduction to 22 features yielded similar results.
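A hedged sketch of the kind of classifier the abstract describes: an ensemble of multilayer perceptrons evaluated with cross-validation, here built with scikit-learn. The placeholder feature matrix, label vector, hidden-layer size, and ensemble size are all assumptions; the actual 84 features and the Auditory Receptive Fields Toolbox front end are not reproduced.

```python
import numpy as np
from sklearn.ensemble import VotingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Placeholder data standing in for per-segment audio features and
# binary labels (e.g. phonation present / absent).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 84))     # 84 features, matching the count in the abstract
y = rng.integers(0, 2, size=200)   # random labels, for illustration only

# Ensemble of small MLPs; architecture and ensemble size are arbitrary choices.
ensemble = VotingClassifier(
    estimators=[
        (f"mlp{i}", MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=i))
        for i in range(5)
    ],
    voting="soft",
)
model = make_pipeline(StandardScaler(), ensemble)

scores = cross_val_score(model, X, y, cv=5)
print("cross-validated accuracy:", scores.mean())
```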


Eurasip Journal on Audio, Speech, and Music Processing | 2015

Emotion in the singing voice—a deeper look at acoustic features in the light of automatic classification

Florian Eyben; Gláucia Laís Salomão; Johan Sundberg; Klaus R. Scherer; Björn W. Schuller


Archive | 2008

Relationship between perceived vocal registers and glottal flow parameters

Gláucia Laís Salomão


WEIRCLE, Workshop on Extensive and Intensive Recordings of Children's Language Environment | 2015

The Swedish MINT Project: modelling infant language acquisition

Tove Gerholm; Lisa Gustavsson; Iris-Corinna Schwarz; Ulrika Marklund; Gláucia Laís Salomão; Petter Kallioinen; S. Andersson; F. Eriksson; David Pagmar; S. Tahbaz


WEIRCLE, Paris 7-9 December 2015 | 2015

The Swedish MINT Project: modelling infant language acquisition from parent-child interaction

Tove Gerholm; Lisa Gustavsson; Gláucia Laís Salomão; Iris-Corinna Schwarz


PEVOC 11th PAN-EUROPEAN VOICE CONFERENCE | 2015

Emotional Coloring of the Singing Voice

Gláucia Laís Salomão; Johan Sundberg

Collaboration


Dive into Gláucia Laís Salomão's collaborations.

Top Co-Authors

Johan Sundberg, Royal Institute of Technology
Sten Ternström, Royal Institute of Technology
Anita McAllister, Karolinska University Hospital
Anders Elowsson, Royal Institute of Technology
Anders Friberg, Royal Institute of Technology
Martin Hellwagner, Royal Institute of Technology