Network


Latest external collaborations at the country level.

Hotspot


Research topics in which María Inés Torres is active.

Publication


Featured research published by María Inés Torres.


International Conference on Acoustics, Speech, and Signal Processing | 2007

Speech Translation with Phrase Based Stochastic Finite-State Transducers

Alicia Pérez; María Inés Torres; Francisco Casacuberta

Stochastic finite-state transducers constitute a type of word-based model that allows easy integration with acoustic models for speech translation. The aim of this work is to develop a novel approach to phrase-based statistical finite-state transducers. In particular, we explore the use of linguistically motivated phrases to build phrase-based models. The proposed phrase-based transducer has been tested and compared with an equivalent word-based machine, yielding promising results in the reported preliminary text and speech translation experiments.
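
The phrase-based SFST idea can be illustrated with a small, self-contained sketch. The toy transducer, phrases and probabilities below are hypothetical, and the decoder is only a minimal stand-in for the search described in the paper: each transition consumes a source phrase, emits a target phrase and carries a probability, and the translation is the highest-probability path that consumes the whole input.

```python
# Minimal sketch of decoding with a phrase-based stochastic finite-state
# transducer (toy example, not the authors' implementation).
import math

# (state, source_phrase) -> list of (next_state, target_phrase, prob)
transitions = {
    (0, ("buenos", "dias")): [(1, ("good", "morning"), 0.7)],
    (0, ("buenos",)):        [(2, ("good",), 0.3)],
    (2, ("dias",)):          [(1, ("days",), 0.4), (1, ("morning",), 0.6)],
}
final_states = {1}

def translate(source_words):
    """Viterbi-style search: best-scoring path that consumes all source words."""
    beam = {(0, 0): (0.0, ())}   # (state, position) -> (log prob, output so far)
    best = None
    while beam:
        new_beam = {}
        for (state, pos), (logp, out) in beam.items():
            if pos == len(source_words):
                if state in final_states and (best is None or logp > best[0]):
                    best = (logp, out)
                continue
            # try source phrases of length 1..2 starting at pos
            for span in (1, 2):
                if pos + span > len(source_words):
                    continue
                phrase = tuple(source_words[pos:pos + span])
                for nxt, target, p in transitions.get((state, phrase), []):
                    cand = (logp + math.log(p), out + target)
                    key = (nxt, pos + span)
                    if key not in new_beam or cand[0] > new_beam[key][0]:
                        new_beam[key] = cand
        beam = new_beam
    return best

print(translate(["buenos", "dias"]))  # -> (log prob, ('good', 'morning'))
```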


IWSDS | 2017

A Multi-lingual Evaluation of the vAssist Spoken Dialog System. Comparing Disco and RavenClaw

Javier Mikel Olaso; Pierrick Milhorat; Julia Himmelsbach; Jérôme Boudy; Gérard Chollet; Stephan Schlögl; María Inés Torres

vAssist (Voice Controlled Assistive Care and Communication Services for the Home) is a European project for which several research institutes and companies have been working on the development of adapted spoken interfaces to support home care and communication services. This paper describes the spoken dialog system that has been built. Its natural language understanding module includes a novel reference resolver and it introduces a new hierarchical paradigm to model dialog tasks. The user-centered approach applied to the whole development process led to the setup of several experiment sessions with real users. Multilingual experiments carried out in Austria, France and Spain are described along with their analyses and results in terms of both system performance and user experience. An additional experimental comparison of the RavenClaw and Disco-LFF dialog managers built into the vAssist spoken dialog system highlighted similar performance and user acceptance.


International Conference on Advanced Technologies for Signal and Image Processing | 2016

The Roberta IRONSIDE project: A dialog capable humanoid personal assistant in a wheelchair for dependent persons

Hugues Sansen; María Inés Torres; Gérard Chollet; Cornelius Glackin; Dijana Petrovska-Delacrétaz; Jérôme Boudy; Atta Badii; Stephan Schlögl

With an aging population and the financial difficulty of providing a full-time caregiver for every dependent person living at home, assistant robots appear to be a solution for advanced countries. However, most of what can be done with a robot can be done without it, so it is difficult to quantify the real value an assistant robot adds. Such a robot should be a real assistant capable of helping a person, whether indoors or outdoors. Additionally, the robot should be a companion for dialogue, as well as a system capable of detecting health problems. The Roberta Ironside project is a robotic evolution embodying the expertise gained during the development of purely vocal personal assistants for dependent persons in the vAssist project (Sansen et al. 2014). The project proposes a relatively affordable and simplified design of a human-sized humanoid robot that fits the requirements of this analysis. After an overall description of the robot and a justification of the novel choice of a handicapped robot in an electric wheelchair, this paper emphasizes the technology used for the head and face and the resulting verbal and non-verbal communication capabilities of the robot, in turn highlighting the characteristics of Embodied Conversational Agents.


Iberian Conference on Pattern Recognition and Image Analysis | 2015

Combining Statistical and Semantic Knowledge for Sarcasm Detection in Online Dialogues

José M. Alcaide; Raquel Justo; María Inés Torres

The detection of secondary emotions, like sarcasm, in online dialogues is a difficult task that has rarely been treated in the literature. In this work (partially supported by the Spanish Ministry of Science under grant TIN2011-28169-C05-04 and by the Basque Government under grant IT685-13), we tackle this problem as an affective pattern recognition problem. Specifically, we consider different kinds of information sources (statistical and semantic) and propose alternative ways of combining them. We also compare a Support Vector Machine (SVM) classification method with a simpler Naive Bayes parametric classifier. The experimental results show that, by combining statistical and semantic feature sets, comparable performance can be achieved with Naive Bayes and SVM classifiers.
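
As a rough illustration of this comparison, the sketch below combines a bag-of-words (statistical) feature set with a toy lexicon-count (semantic) feature and evaluates Naive Bayes against an SVM in scikit-learn. The texts, labels and lexicon are invented and are not the feature sets used in the paper.

```python
# Sketch: combine statistical and "semantic" features, compare NB vs. SVM.
import numpy as np
from sklearn.base import BaseEstimator, TransformerMixin
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import FeatureUnion, Pipeline
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

POSITIVE_WORDS = {"great", "wonderful", "love"}   # toy "semantic" lexicon

class LexiconFeatures(BaseEstimator, TransformerMixin):
    """Counts positive-lexicon words per utterance (a stand-in for semantic cues)."""
    def fit(self, X, y=None):
        return self
    def transform(self, X):
        return np.array([[sum(w in POSITIVE_WORDS for w in text.lower().split())]
                         for text in X])

texts = ["Oh great, another delay. I just love waiting.",
         "Thanks, the support team solved it quickly.",
         "Wonderful, my order vanished again.",
         "The delivery arrived on time."]
labels = [1, 0, 1, 0]  # 1 = sarcastic, 0 = not (toy data)

features = FeatureUnion([("bow", CountVectorizer()), ("lex", LexiconFeatures())])
for name, clf in [("NaiveBayes", MultinomialNB()), ("SVM", LinearSVC())]:
    pipe = Pipeline([("feats", features), ("clf", clf)])
    scores = cross_val_score(pipe, texts, labels, cv=2)
    print(name, scores.mean())
```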


Eating and Weight Disorders - Studies on Anorexia, Bulimia and Obesity | 2003

Body mass index and some psychopathological symptoms in open community nuns

J. A. Guisado Macías; F. J. Vaz; J. Guisado; María Inés Torres; D. Peral; M.A. Fernández-Gil

This article examines the connections between body weight and psychopathological symptoms in a religious community. The Symptom Checklist 90-Revised was administered to 34 nuns, whose body mass index (BMI) values significantly correlated with hostility (r=0.46, p<0.01). These findings support the idea that people living in open religious communities share social values regarding weight and body size, and reveal high levels of psychological discomfort when body weight increases.


IWSDS | 2017

Entropy-Driven Dialog for Topic Classification: Detecting and Tackling Uncertainty

Manex Serras; Naiara Perez; María Inés Torres; Arantza del Pozo

A frequent difficulty faced by developers of dialog systems is the absence of a corpus of conversations with which to model the dialog statistically. Even when such a corpus is available, neither an agenda-based nor a statistical dialog control logic is an option if the domain knowledge is broad. This article presents a module that automatically generates system-turn utterances to guide the user through the dialog. These system turns are not established beforehand and vary with each dialog. In particular, the task defined in this paper is the automation of a call-routing service. The proposed module is used when the user has not given enough information to route the call with high confidence; by means of the generated system turns, the available information is then refined through the dialog. The article focuses on the development and operation of this module, which is valid for both agenda-based and statistical approaches and is applicable to both types of corpora.
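
A minimal sketch of the entropy-driven idea, not the paper's implementation: when the entropy of the topic posterior exceeds an assumed threshold, the call cannot be routed confidently, so a clarification system turn is generated instead.

```python
# Sketch: route the call when the topic posterior is peaked, otherwise ask.
import math

ENTROPY_THRESHOLD = 1.0  # assumed threshold, in bits

def entropy(posterior):
    """Shannon entropy of a topic probability distribution."""
    return -sum(p * math.log2(p) for p in posterior.values() if p > 0)

def next_action(topic_posterior):
    if entropy(topic_posterior) > ENTROPY_THRESHOLD:
        # ask about the two most likely competing topics
        top = sorted(topic_posterior, key=topic_posterior.get, reverse=True)[:2]
        return f"Is your call about {top[0]} or {top[1]}?"
    best = max(topic_posterior, key=topic_posterior.get)
    return f"Routing call to the {best} department."

print(next_action({"billing": 0.40, "repairs": 0.35, "sales": 0.25}))  # asks
print(next_action({"billing": 0.90, "repairs": 0.06, "sales": 0.04}))  # routes
```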


International Workshop on Future and Emerging Trends in Language Technology | 2016

LifeLine Dialogues with Roberta

Asier López; Ahmed Ratni; Trung Ngo Trong; Javier Mikel Olaso; Seth Montenegro; Minha Lee; Fasih Haider; Stephan Schlögl; Gérard Chollet; Kristiina Jokinen; D. Petrovska-Delacretaz; Hugues Sansen; María Inés Torres

This paper describes work on dialogue data collection and dialogue system design for personal assistant humanoid robots undertaken at eNTERFACE 2016. The emphasis has been on the system’s speech capabilities and dialogue modeling of what we call LifeLine Dialogues, i.e. dialogues that help people tell stories about their lives. The main goal behind this type of application is to help elderly people exercise their speech and memory capabilities. The system further aims at acquiring a good level of knowledge about the person’s interests and thus is expected to feature open-domain conversations, presenting useful and interesting information to the user. The novel contributions of this work are: (1) a flexible spoken dialogue system that extends the Ravenclaw-type agent-based dialogue management model with topic management and multi-modal capabilities, especially with face recognition technologies, (2) a collection of WOZ-data related to initial encounters and presentation of information to the user, and (3) the establishment of a closer conversational relationship with the user by utilizing additional data (e.g. context, dialogue history, emotions, user goals, etc.).


Text, Speech and Dialogue | 2010

Dialogue system based on EDECÁN architecture

Javier Mikel Olaso; María Inés Torres

Interactive and multimodal interfaces have proved helpful in human-machine interactive systems such as dialogue systems. Facial animation, specifically lip motion, helps to make speech comprehensible and dialogue turns intuitive. The dialogue system under consideration consists of a stand that allows users to get current and past news published on the Internet by several newspapers and sites, as well as weather information, initially for Spanish cities, although it can easily be extended to other cities around the world. The final goal is to provide valuable information and entertainment to people queuing or just passing by. The system also targets disabled people, thanks to the different multi-modal inputs and outputs taken into consideration. This work describes the different modules that make up the dialogue system; these modules were developed under the EDECÁN architecture specifications.


Archive | 2019

Tracking the Expression of Annoyance in Call Centers

Jon Irastorza; María Inés Torres

Machine learning researchers have dealt with the identification of emotional cues from speech, since it is a research domain with a large number of potential applications. Many acoustic parameters have been analyzed in the search for cues that identify emotional categories, and both classical classifiers and more recent computational approaches have been developed. Experiments have mainly been carried out over induced emotions, even if research has recently been shifting toward spontaneous emotions. In such a framework, it is worth mentioning that the expression of spontaneous emotions depends on cultural factors, on the particular individual and also on the specific situation. In this work, we were interested in emotional shifts during conversation. In particular, we aimed to track the annoyance shifts appearing in phone conversations with complaint services. To this end, we analyzed a set of audio files showing different ways of expressing annoyance: the call center operators identified disappointment, impotence or anger as expressions of annoyance. However, our experiments showed that variations of parameters derived from intensity, combined with some spectral information and suprasegmental features, are very robust for each speaker and annoyance rate. The work also discusses the annotation problem that arises when dealing with human labelling of subjective events, and proposes an extended rating scale in order to accommodate annotators' disagreements. Our frame classification results validated the chosen annotation procedure, and the experimental results also showed that shifts in customer annoyance rates could potentially be tracked during phone calls.
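
The kind of frame-level intensity and spectral descriptors referred to above could be extracted as in the sketch below, here using librosa; the parameter set, frame sizes and input file are assumptions, not the authors' configuration.

```python
# Sketch: per-frame intensity and spectral features for a call recording.
import numpy as np
import librosa

def frame_features(wav_path, frame_length=1024, hop_length=512):
    y, sr = librosa.load(wav_path, sr=16000)              # hypothetical input file
    rms = librosa.feature.rms(y=y, frame_length=frame_length,
                              hop_length=hop_length)       # intensity proxy
    centroid = librosa.feature.spectral_centroid(y=y, sr=sr,
                                                 hop_length=hop_length)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13,
                                hop_length=hop_length)     # spectral shape
    n = min(rms.shape[1], centroid.shape[1], mfcc.shape[1])
    # one feature vector per frame: [rms, centroid, mfcc_1..mfcc_13]
    return np.vstack([rms[:, :n], centroid[:, :n], mfcc[:, :n]]).T

# feats = frame_features("call_center_recording.wav")
# feats.shape -> (num_frames, 15); feed these to a per-frame annoyance classifier
```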


IWSDS | 2019

Regularized Neural User Model for Goal-Oriented Spoken Dialogue Systems

Manex Serras; María Inés Torres; Arantza del Pozo

User simulation is widely used to generate artificial dialogues in order to train statistical spoken dialogue systems and perform evaluations. This paper presents a neural network approach to user modeling that exploits a bidirectional encoder-decoder architecture with a regularization layer for each dialogue act. In order to minimize the impact of data sparsity, the dialogue act space is compressed according to the user goal. Experiments on the Dialogue State Tracking Challenge 2 (DSTC2) dataset provide significant results for dialogue-act and slot-level prediction, outperforming previous neural user modeling approaches in terms of F1 score.
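
A simplified PyTorch sketch of such a user model is given below; the bidirectional recurrent encoder, dropout regularization layer and multi-label act output are only a stand-in for the architecture described in the paper, with all dimensions assumed.

```python
# Sketch: bidirectional recurrent user model predicting the user's next
# dialogue acts from the dialogue history (dimensions assumed).
import torch
import torch.nn as nn

class NeuralUserModel(nn.Module):
    def __init__(self, n_acts, hidden=64, dropout=0.5):
        super().__init__()
        self.encoder = nn.GRU(input_size=n_acts, hidden_size=hidden,
                              batch_first=True, bidirectional=True)
        self.regularizer = nn.Dropout(dropout)        # regularization layer
        self.head = nn.Linear(2 * hidden, n_acts)     # one logit per dialogue act

    def forward(self, history):
        # history: (batch, turns, n_acts) binary act vectors for past turns
        _, h = self.encoder(history)                  # h: (2, batch, hidden)
        summary = torch.cat([h[0], h[1]], dim=-1)     # join both directions
        return torch.sigmoid(self.head(self.regularizer(summary)))

model = NeuralUserModel(n_acts=20)
fake_history = torch.randint(0, 2, (4, 5, 20)).float()   # 4 dialogues, 5 turns
print(model(fake_history).shape)                          # -> torch.Size([4, 20])
```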

Collaboration


Dive into María Inés Torres's collaborations.

Top Co-Authors

Javier Mikel Olaso (University of the Basque Country)
Stephan Schlögl (MCI Management Center Innsbruck)
Raquel Justo (University of the Basque Country)
Alicia Pérez (University of the Basque Country)
Encarna Segarra (Polytechnic University of Valencia)
Javier Ferreiros (Technical University of Madrid)