Network


Latest external collaborations at the country level. Dive into details by clicking on the dots.

Hotspot


Dive into the research topics where Josué Cabrera is active.

Publication


Featured research published by Josué Cabrera.


Expert Systems With Applications | 2015

New approach in quantification of emotional intensity from the speech signal

Jesús B. Alonso; Josué Cabrera; Manuel Medina; Carlos M. Travieso

Highlights:
- A method for quantifying emotional intensity from speech is presented.
- A simple and robust alternative based on temporal segmentation is presented.
- A reduced set of prosodic and paralinguistic features is used.
- Three databases of emotional speech in different languages have been used.

Automatic speech emotion recognition has huge potential in fields such as psychology, psychiatry and affective computing. Spontaneous speech is continuous, and emotions are expressed at certain moments of the dialogue, giving rise to emotional turns. Real-time applications therefore need to detect changes in the speaker's affective state. In this paper, we focus on recognizing activation from speech using a small feature set obtained from a temporal segmentation of the speech signal in different languages (German, English and Polish). The feature set includes two prosodic features and four paralinguistic features related to pitch and spectral energy balance. This segmentation and feature set are suitable for real-time emotion applications because they allow changes in the emotional state to be detected with very low processing times. The German corpus EMO-DB (Berlin Database of Emotional Speech), the English corpus LDC (Emotional Prosody Speech and Transcripts) and the Polish Emotional Speech Database are used to train a Support Vector Machine (SVM) classifier for gender-dependent activation recognition. The results are analyzed for each emotion and gender separately, with accuracies of 94.9%, 88.32% and 90% for the EMO-DB, LDC and Polish databases respectively. This new approach offers performance comparable to other approaches at lower complexity for real-time applications, making it an appealing alternative that may assist the future development of automatic speech emotion recognition systems with continuous tracking.
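The pipeline described above (temporal segmentation followed by a small per-segment feature set) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the paper uses pitch- and spectral-energy-balance features and an SVM classifier, while this sketch substitutes short-term energy and zero-crossing rate as simple stand-in features and omits the classification stage.

```python
import math

def segment_features(signal, sr, win_s=0.5):
    """Split a speech signal into fixed-length temporal segments and
    compute two illustrative features per segment: short-term energy
    and zero-crossing rate (a crude voicing/pitch proxy)."""
    win = int(win_s * sr)
    feats = []
    for start in range(0, len(signal) - win + 1, win):
        seg = signal[start:start + win]
        energy = sum(x * x for x in seg) / win            # mean squared amplitude
        zcr = sum(1 for a, b in zip(seg, seg[1:]) if a * b < 0) / win
        feats.append((energy, zcr))
    return feats
```

In a full system each feature vector would then be fed to a trained classifier; keeping the per-segment computation this cheap is what makes segment-by-segment real-time tracking feasible.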


Expert Systems With Applications | 2015

Advance in the bat acoustic identification systems based on the audible spectrum using nonlinear dynamics characterization

Jesús B. Alonso; Aarón Henríquez; Patricia Henríquez; Bernal Rodríguez-Herrera; Federico Bolaños; Priscilla Alpízar; Carlos M. Travieso; Josué Cabrera

Highlights:
- Combination of linear and nonlinear parameterization.
- Up to 12 chaos-theory parameters are used.
- Simple and quick classification method.
- Significant improvement in the correct classification rate thanks to the chaos-theory parameters.

Linear frequency- and time-domain parameters have shown good performance in the recognition of bat species, and many works now extract those characteristics very accurately. However, it is necessary to move forward and test the capabilities of other characterizations successfully used in other bioacoustics fields. In this work chaos theory, an area of nonlinear dynamical systems, is applied to bat acoustic identification. The database used in the evaluation consists of 50 bat calls of seven different classes extracted from a previous work. The combination of linear and nonlinear parameters yields an average error of 1.8%, improving the accuracy by 0.42%. The gap between the species that are most difficult to identify and the easiest ones has also been reduced.
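The abstract does not list the 12 chaos-theory parameters, so as a hedged illustration of the kind of nonlinear-dynamics descriptor such systems combine with linear features, here is a plain-Python sample entropy (SampEn), a standard measure of signal irregularity; this is an example of the technique family, not the paper's exact parameter set.

```python
import math

def sample_entropy(x, m=2, r=0.2):
    """Sample entropy: -log of the conditional probability that two
    sequences matching for m points (within tolerance r) also match
    for m+1 points. Lower values = more regular, higher = more chaotic."""
    n = len(x)
    def count(length):
        c = 0
        for i in range(n - m):              # same template count for both lengths
            for j in range(i + 1, n - m):
                if max(abs(x[i + k] - x[j + k]) for k in range(length)) <= r:
                    c += 1
        return c
    b = count(m)       # matches of length m
    a = count(m + 1)   # matches of length m + 1
    return math.log(b / a) if a > 0 and b > 0 else float("inf")
```

A strictly periodic call segment yields SampEn near zero, while a chaotic one (e.g. a logistic-map orbit) yields a larger value, which is why such parameters can separate species whose linear spectra look alike.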


International Conference on Signal Processing | 2015

Emotional states discrimination in voice in secure environments

Josué Cabrera; Jesús B. Alonso; Carlos M. Travieso; Miguel A. Ferrer; Patricia Henríquez; Malay Kishore Dutta; Anushikha Singh

In this paper we present the use of emotions in security. Access control systems based on speech recognition can be complemented by a system that discriminates emotions from speech. An emotion recognition system has the advantage of increasing security by identifying whether an authorized user is being coerced while performing speech identification, which would be reflected as nervousness in their speech. In this study we use four emotional states: three that would raise an alarm in a security system (anxiety, hot anger and panic), and a fourth state, which we have called rest, corresponding to emotions without security relevance, such as happiness, shame, sadness, boredom or the absence of emotion (neutral). In our simulations, the proposed emotion discrimination system obtained an average accuracy of about 79%.


Neurocomputing | 2015

A study of glottal excitation synthesizers for different voice qualities

Jesús B. Alonso; Miguel A. Ferrer; Patricia Henríquez; Karmele López-de-Ipiña; Josué Cabrera; Carlos M. Travieso

The aim of this paper is to analyze the improvements observed in glottal excitation synthesizers when possible manifestations of non-linear behavior are characterized in the glottal excitation. The paper proposes a new model based on the modification of a classic glottal excitation synthesizer and studies the improvements with respect to different glottal excitation synthesizers. The proposed model tries to improve the naturalness of the synthesized voice by synthesizing the sub-harmonics. It is included in a generic synthesizer of sustained vowels in order to assess the quality of the synthesis for different voice qualities, where speakers with pathologies of the phonatory system are used to simulate the behavior of low-quality voices. The different models are adjusted using genetic algorithms. The assessment of the different glottal excitation synthesizers is obtained using an objective measure of similarity between the original and the synthesized signals, based on temporal and spectral measurements. In addition, the quality of the proposed glottal excitation model is evaluated with a study of subjective perception.
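The paper's synthesizer itself is not reproduced here; as a rough sketch under stated assumptions, the classic Rosenberg glottal pulse (one standard "classic glottal excitation" model) with alternate-period amplitude modulation illustrates one simple way a subharmonic at half the fundamental can be injected into a glottal excitation. The modulation scheme and all parameter values are illustrative choices, not the authors' model.

```python
import math

def rosenberg_pulse(n, rise_frac=0.4, fall_frac=0.2):
    """One period (n samples) of a classic Rosenberg glottal pulse:
    a cosine-shaped opening phase, a quarter-cosine closing phase,
    and a closed (zero) phase for the rest of the period."""
    n_rise, n_fall = int(rise_frac * n), int(fall_frac * n)
    pulse = []
    for i in range(n):
        if i < n_rise:
            pulse.append(0.5 * (1.0 - math.cos(math.pi * i / n_rise)))
        elif i < n_rise + n_fall:
            pulse.append(math.cos(0.5 * math.pi * (i - n_rise) / n_fall))
        else:
            pulse.append(0.0)
    return pulse

def glottal_excitation(periods, n, subharmonic_depth=0.0):
    """Concatenate pulse periods; modulating the amplitude of alternate
    periods injects a subharmonic at half the fundamental frequency
    (depth 0.0 reproduces the unmodified classic pulse train)."""
    out = []
    for p in range(periods):
        gain = 1.0 + (subharmonic_depth if p % 2 == 0 else -subharmonic_depth)
        out.extend(gain * s for s in rosenberg_pulse(n))
    return out
```

Feeding such an excitation through a vocal-tract filter is the usual source-filter route to a sustained vowel; the depth parameter is the kind of knob a genetic algorithm could tune against a similarity measure.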


Neurocomputing | 2017

Continuous tracking of the emotion temperature

Jesús B. Alonso; Josué Cabrera; Carlos M. Travieso; Karmele López-de-Ipiña; Agustín J. Sánchez-Medina

Speech emotion recognition has huge potential in human-computer interaction applications in fields such as psychology, psychiatry and affective computing. The great majority of research on speech emotion recognition has been based on record repositories consisting of short sentences recorded under laboratory conditions. In this work, we researched the use of the Emotional Temperature strategy for continuous tracking in long-term speech samples in which there are emotional changes during the speech. Emotional Temperature uses a small set of prosodic and paralinguistic features obtained from a temporal segmentation of the speech signal. The simplicity and compactness of the set, previously validated under laboratory conditions, make it appropriate for use under real conditions, where spontaneous speech is continuous and emotions are expressed at certain moments of the dialogue, giving rise to emotional turns. This strategy is robust, offers low computational cost and the ability to detect emotional changes, and improves on the performance of a segmentation based on linguistic aspects. The German corpus EMO-DB (Berlin Database of Emotional Speech), the English corpus LDC (Emotional Prosody Speech and Transcripts), the Polish Emotional Speech Database and the RECOLA (Remote Collaborative and Affective Interactions) database are used to validate the continuous tracking system for emotional speech. Two experimental conditions are analyzed, dependence and independence on language and gender, using acted and spontaneous speech respectively. Under acted conditions, the approach obtained accuracies of 67–97%, while under spontaneous conditions, compared with annotations by human judges, accuracies of 41–50% were obtained. In comparison with previous studies in continuous emotion recognition, the approach improves on existing results by an average of 9% in accuracy. The approach therefore performs well with low complexity for real-time or continuous-tracking emotional speech applications.
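Continuous tracking of this kind reduces to sliding a short analysis window over a long recording and scoring each window. The sketch below shows that scaffolding only; the per-window scoring function is supplied by the caller (in the paper it would be the trained Emotional Temperature classifier, which is not reproduced here), and the window/hop lengths are illustrative.

```python
def emotion_temperature_track(signal, sr, score, win_s=1.0, hop_s=0.5):
    """Continuous-tracking sketch: slide a window over a long recording
    and call a per-window scoring function (e.g. a classifier output
    mapped to [0, 1]) to obtain a 'temperature' trace over time.
    Returns a list of (time_in_seconds, score) pairs."""
    win, hop = int(win_s * sr), int(hop_s * sr)
    trace = []
    for start in range(0, len(signal) - win + 1, hop):
        trace.append((start / sr, score(signal[start:start + win])))
    return trace
```

Overlapping hops (hop < window) are what let the trace localize an emotional turn mid-dialogue instead of only labelling whole utterances.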


2015 4th International Work Conference on Bioinspired Intelligence (IWOBI) | 2015

First approach to continuous tracking of emotional temperature

Jesús B. Alonso; Josué Cabrera; Carlos M. Travieso; Karmele López-de-Ipiña

A wide range of new applications can arise from assessing emotional state from the speech signal, which represents a marked improvement in human-machine interfaces and has become an important research area in recent years. The study of emotions is not a trivial task and involves a degree of difficulty. The great majority of research on speech emotion recognition has been made on the basis of record repositories consisting of short sentences recorded under laboratory conditions. In this work we propose a strategy, previously validated under the conditions described above, for continuous tracking in long-term speech samples in which there are emotional changes during the speech. The strategy uses a small set of prosodic and paralinguistic features obtained from a temporal segmentation of the speech signal, which is more appropriate for real-world scenarios. This paper presents a simple and effective method, named Emotional Temperature, for automatic discrimination between positive and negative emotional intensity in speech. The strategy is robust, offers low computational cost and the ability to detect emotional changes, and improves on the performance of a segmentation based on linguistic aspects.


3rd IEEE International Work-Conference on Bioinspired Intelligence | 2014

Emotional temperature

Jesús B. Alonso; Josué Cabrera; Carlos M. Travieso


Revista de Logopedia, Foniatría y Audiología | 2015

Herramienta para la evaluación acústica de la voz en entornos hospitalarios: EVALUA (EVALUA: a tool for acoustic voice assessment in hospital environments)

José de León y de Juan; Juan Francisco Rivero Suárez; Carolina León Manaure; Felipe Jungjoham Jofre; Sergio Miranda Fandiño; Javier González González; Jesús B. Alonso; Josué Cabrera; Miguel A. Ferrer; Carlos M. Travieso


II Jornadas Iberoamericanas de Innovación Educativa en el ámbito de las TIC | 2015

e-Voice: sistema de evaluación remota del sistema fonador (e-Voice: a remote assessment system for the phonatory system)

Josué Cabrera; Jesús B. Alonso-Hernández; Carlos M. Travieso-González; Miguel Ángel Ferrer Ballester; José de León y de Juan


II Jornadas Iberoamericanas de Innovación Educativa en el ámbito de las TIC | 2015

Tool for biomedical signals processing

Josué Cabrera; Jesús B. Alonso-Hernández; Carlos M. Travieso-González

Collaboration


Dive into Josué Cabrera's collaborations.

Top Co-Authors

Carlos M. Travieso
University of Las Palmas de Gran Canaria

Jesús B. Alonso
University of Las Palmas de Gran Canaria

Miguel A. Ferrer
University of Las Palmas de Gran Canaria

Karmele López-de-Ipiña
University of the Basque Country

Carlos M. Travieso-González
University of Las Palmas de Gran Canaria

Jesús B. Alonso-Hernández
University of Las Palmas de Gran Canaria

Patricia Henríquez
University of Las Palmas de Gran Canaria

Agustín J. Sánchez-Medina
University of Las Palmas de Gran Canaria