Gláucia Laís Salomão
Royal Institute of Technology
Publications
Featured research published by Gláucia Laís Salomão.
Computer Speech & Language | 2015
Klaus R. Scherer; Johan Sundberg; Lucas Tamarit; Gláucia Laís Salomão
We examine the similarities and differences in the expression of emotion in the singing and the speaking voice. Three internationally renowned opera singers produced "vocalises" (on a schwa vowel) and short nonsense phrases in different interpretations for 10 emotions. Acoustic analyses of emotional expression in the singing samples show significant differences between the emotions. In addition to the obvious effects of loudness and tempo, spectral balance and perturbation make significant contributions (high effect sizes) to this differentiation. A comparison of the emotion-specific patterns produced by the singers in this study with published data for professional actors portraying different emotions in speech generally shows a very high degree of similarity. However, singers tend to rely more than actors on voice perturbation, specifically vibrato, particularly for high-arousal emotions. It is suggested that this may be due to the restrictions and constraints imposed by the musical structure.
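Vibrato, the perturbation cue the singers favored, can be quantified from an F0 contour. Below is a minimal sketch of such a measurement, not the analysis code used in the study; the function name, the 4-8 Hz search band, and the peak-based extent estimate are assumptions for illustration.

```python
import numpy as np

def vibrato_rate_and_extent(f0_hz, frame_rate_hz):
    """Estimate vibrato rate (Hz) and extent (semitones) from a voiced,
    gap-free F0 contour sampled at frame_rate_hz frames per second."""
    # Work in semitones relative to the mean pitch, so that the extent
    # is perceptually meaningful; then remove the mean.
    st = 12.0 * np.log2(np.asarray(f0_hz) / np.mean(f0_hz))
    st -= np.mean(st)

    # Magnitude spectrum of the windowed modulation signal.
    spectrum = np.abs(np.fft.rfft(st * np.hanning(len(st))))
    freqs = np.fft.rfftfreq(len(st), d=1.0 / frame_rate_hz)

    # Pick the strongest component in a typical vibrato band (4-8 Hz).
    band = (freqs >= 4.0) & (freqs <= 8.0)
    rate_hz = freqs[np.argmax(np.where(band, spectrum, 0.0))]

    # Peak deviation of the detrended contour approximates the extent.
    extent_st = np.max(np.abs(st))
    return rate_hz, extent_st
```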
Logopedics Phoniatrics Vocology | 2009
Gláucia Laís Salomão; Johan Sundberg
The voice source differs between modal and falsetto registers, but singers often try to reduce the associated timbral differences, and some even doubt that there are any. A total of 54 vowel sounds sung in falsetto and modal register by 13 male choir singers of varying experience were analyzed by inverse filtering and electroglottography. Closed quotient, maximum flow declination rate, peak-to-peak airflow amplitude, normalized amplitude quotient, and the level difference between the two lowest source spectrum partials were determined, and systematic differences were found in all singers, regardless of singing experience. The findings seem compatible with previous observations of thicker vocal folds in modal register.
Journal of the Acoustical Society of America | 2008
Gláucia Laís Salomão; Johan Sundberg
The perception of modal and falsetto registers was analyzed in a set of 104 vowel sounds sung by 13 choir singers, 52 in modal register and 52 in falsetto register. These vowel sounds were classified by 16 expert listeners in a forced-choice test, and the number of votes for modal was compared with five voice source parameters: (1) closed quotient (Q(closed)), (2) level difference between the two lowest source spectrum partials (H1-H2), (3) AC amplitude, (4) maximum flow declination rate (MFDR), and (5) normalized amplitude quotient (NAQ = (AC amplitude/MFDR) × fundamental frequency, i.e., AC amplitude/(MFDR × period)). Tones with a high value of Q(closed) and low values of H1-H2 and NAQ were typically associated with a high number of votes for modal register, and vice versa, Q(closed) showing the strongest correlation. Some singers produced tones that could not be classified as either falsetto or modal register, suggesting that classification of registers is not always feasible.
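The definitions of parameters (3)-(5) are simple enough to state as code. Below is a minimal sketch, assuming a single period of inverse-filtered airflow is available as a NumPy array; the function and variable names are illustrative, not from the paper.

```python
import numpy as np

def flow_glottogram_parameters(flow, fs, f0):
    """Compute AC amplitude, MFDR, and NAQ from one period of
    inverse-filtered airflow `flow` sampled at `fs` Hz, with
    fundamental frequency `f0` in Hz."""
    # (3) AC amplitude: peak-to-peak airflow within the period.
    ac_amplitude = np.max(flow) - np.min(flow)
    # (4) MFDR: magnitude of the steepest decline of the flow signal,
    # i.e. the largest negative value of its time derivative.
    mfdr = np.max(-np.gradient(flow, 1.0 / fs))
    # (5) NAQ = AC amplitude / (MFDR * period), which equals
    # (AC amplitude / MFDR) * f0.
    naq = ac_amplitude / (mfdr * (1.0 / f0))
    return ac_amplitude, mfdr, naq
```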
Journal of Voice | 2015
Maria Södersten; Gláucia Laís Salomão; Anita McAllister; Sten Ternström
OBJECTIVES AND STUDY DESIGN: Information about how patients with voice disorders use their voices in natural communicative situations is scarce. Such long-term data have for the first time been uploaded to a central database from different hospitals in Sweden. The purpose was to investigate the potential use of a large set of long-term data for establishing reference values regarding voice use in natural situations. METHODS: VoxLog (Sonvox AB, Umeå, Sweden) was tested for deployment in clinical practice by speech-language pathologists working at nine hospitals in Sweden. Files from 20 patients (16 females and 4 males) with functional, organic, or neurological voice disorders and 10 vocally healthy individuals (8 females and 2 males) were uploaded to a remote central database. All participants had vocally demanding occupations and had been monitored for more than two days. The total recording time was 681 hours and 50 minutes. Data on fundamental frequency (F0, Hz), phonation time (seconds and percentage), voice sound pressure level (SPL, dB), and background noise level (dB) were analyzed for each recorded day and compared across days. Variation across each day was measured using coefficients of variation. RESULTS: Average F0, voice SPL, and especially the level of background noise varied considerably for all participants across each day. Average F0 and voice SPL were considerably higher than reference values from laboratory recordings. CONCLUSIONS: The use of a remote central database and strict protocols can accelerate data collection from larger groups of participants and contribute to establishing reference values regarding voice use in natural situations and from patients with voice disorders. Information about activities and voice symptoms would supplement the objective data and is recommended in future studies.
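The within-day variation measure used here, the coefficient of variation, is simply the standard deviation normalized by the mean. A generic sketch follows (not VoxLog's implementation; the example values are made up for illustration).

```python
import numpy as np

def coefficient_of_variation(values):
    """CV = standard deviation / mean, a dimensionless spread measure."""
    values = np.asarray(values, dtype=float)
    return np.std(values) / np.mean(values)

# Hypothetical per-hour mean F0 values (Hz) over one recorded day.
f0_by_hour = [198.0, 205.0, 221.0, 189.0, 230.0, 210.0]
print(f"CV of F0 across the day: {coefficient_of_variation(f0_by_hour):.3f}")
```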
Journal of the Acoustical Society of America | 2018
Anders Friberg; Tony Lindeberg; Martin Hellwagner; Pétur Helgason; Gláucia Laís Salomão; Anders Elowsson; Guillaume Lemaitre; Sten Ternström
Vocal sound imitations provide a new challenge for understanding the coupling between articulatory mechanisms and the resulting audio. In this study, the classification of three articulatory categories, phonation, supraglottal myoelastic vibrations, and turbulence, has been modeled from audio recordings. Two data sets were assembled, consisting of different vocal imitations by four professional imitators and four non-professional speakers in two different experiments. The audio data were manually annotated by two experienced phoneticians using a detailed articulatory description scheme. A separate set of audio features was developed specifically for each category using both time-domain and spectral methods. For all time-frequency transformations, and for some secondary processing, the recently developed Auditory Receptive Fields Toolbox was used. Three different machine learning methods were applied for predicting the final articulatory categories. The best generalization was obtained with an ensemble of multilayer perceptrons. The cross-validated classification accuracy was 96.8% for phonation, 90.8% for supraglottal myoelastic vibrations, and 89.0% for turbulence using all 84 developed features. A final reduction to 22 features yielded similar results.
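As a rough sketch of the classification stage only: an ensemble of multilayer perceptrons whose class probabilities are averaged can be assembled from scikit-learn components. This is not the paper's implementation; the network sizes, ensemble size, averaging scheme, and placeholder data are all assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

class MLPEnsemble:
    """Average predicted class probabilities over several MLPs trained
    with different random seeds; the most probable class wins."""

    def __init__(self, n_members=5, hidden=(32,)):
        self.members = [
            MLPClassifier(hidden_layer_sizes=hidden, max_iter=1000,
                          random_state=seed)
            for seed in range(n_members)
        ]

    def fit(self, X, y):
        for member in self.members:
            member.fit(X, y)
        return self

    def predict(self, X):
        probs = np.mean([m.predict_proba(X) for m in self.members], axis=0)
        return self.members[0].classes_[np.argmax(probs, axis=1)]

# Placeholder data: 84 features per frame, binary "phonation" labels.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 84))
y = rng.integers(0, 2, size=200)
model = MLPEnsemble().fit(X, y)
print(model.predict(X[:5]))
```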
Eurasip Journal on Audio, Speech, and Music Processing | 2015
Florian Eyben; Gláucia Laís Salomão; Johan Sundberg; Klaus R. Scherer; Björn W. Schuller
Archive | 2008
Gláucia Laís Salomão
WEIRCLE, Workshop on Extensive and Intensive Recordings of Children's Language Environment | 2015
Tove Gerholm; Lisa Gustavsson; Iris-Corinna Schwarz; Ulrika Marklund; Gláucia Laís Salomão; Petter Kallioinen; S. Andersson; F. Eriksson; David Pagmar; S. Tahbaz
WEIRCLE, Paris, 7-9 December 2015 | 2015
Tove Gerholm; Lisa Gustavsson; Gláucia Laís Salomão; Iris-Corinna Schwarz
PEVOC, 11th Pan-European Voice Conference | 2015
Gláucia Laís Salomão; Johan Sundberg