Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Pierrick Milhorat is active.

Publication


Featured research published by Pierrick Milhorat.


International Conference on Advanced Technologies for Signal and Image Processing | 2014

Building the next generation of personal digital Assistants

Pierrick Milhorat; Stephan Schlögl; Gérard Chollet; Jérôme Boudy; Anna Esposito; G. Pelosi

Voice-based digital assistants such as Apple's Siri and Google's Now are currently booming. Yet, despite their promise of being context-aware and adapted to a user's preferences and very distinct needs, truly personal assistants are still missing. In this paper we highlight some of the challenges in building personalized speech-operated assistive technology and propose a number of research and development directions we have undertaken in order to solve them. In particular, we focus on natural language understanding and dialog management, as we believe these parts of the technology pipeline require the most augmentation.
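
The natural language understanding step singled out above maps a recognized utterance to an intent and slot values before the dialog manager acts on it. As a hedged illustration only (the paper's implementation is not given here), a minimal rule-based NLU for an assistive-home domain might look like the following sketch; the intents, patterns, and slot names are invented for the example.

```python
import re

# Hypothetical intent patterns for a small assistive-home domain.
# Illustrative only: a deployed NLU would be trained on domain-specific
# corpora rather than rely on hand-written regular expressions.
INTENT_PATTERNS = {
    "set_reminder": re.compile(
        r"remind me to (?P<task>.+) at (?P<time>\d{1,2}(?::\d{2})?)", re.IGNORECASE
    ),
    "call_contact": re.compile(r"call (?P<contact>[a-z ]+)", re.IGNORECASE),
}

def understand(utterance: str) -> dict:
    """Map a recognized utterance to an intent label and a slot dictionary."""
    for intent, pattern in INTENT_PATTERNS.items():
        match = pattern.search(utterance)
        if match:
            return {"intent": intent, "slots": match.groupdict()}
    return {"intent": "unknown", "slots": {}}

print(understand("Please remind me to take my medication at 8:30"))
# {'intent': 'set_reminder', 'slots': {'task': 'take my medication', 'time': '8:30'}}
```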


Annual Meeting of the Special Interest Group on Discourse and Dialogue | 2016

Talking with ERICA, an autonomous android

Koji Inoue; Pierrick Milhorat; Divesh Lala; Tianyu Zhao; Tatsuya Kawahara

We demonstrate dialogues with ERICA, an autonomous android with a human-like appearance. Currently, ERICA plays two social roles: a laboratory guide and a counselor. She is designed to follow the protocols of human dialogue to make the user comfortable: (1) having a chat before the main talk, (2) proactively asking questions, and (3) providing appropriate feedback. The combination of her human-like appearance and behaviors appropriate to her social roles allows for symbiotic human-robot interaction.
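
The three-part protocol above (small talk before the main topic, proactive questions, continual feedback) can be read as a phased dialogue policy. The sketch below is a simplified, hypothetical rendering of that idea rather than ERICA's actual controller; the phase names and the turn threshold are assumptions.

```python
from enum import Enum, auto

class Phase(Enum):
    SMALL_TALK = auto()  # (1) have a chat before the main talk
    MAIN_TALK = auto()   # (2) proactively ask questions about the main topic

class ProtocolPolicy:
    """Toy phase-based policy; the turn threshold is arbitrary, for illustration only."""

    def __init__(self, small_talk_turns: int = 2):
        self.phase = Phase.SMALL_TALK
        self.turn = 0
        self.small_talk_turns = small_talk_turns

    def next_action(self, user_utterance: str) -> str:
        self.turn += 1
        feedback = "I see."  # (3) convey feedback on every turn
        if self.phase is Phase.SMALL_TALK and self.turn > self.small_talk_turns:
            self.phase = Phase.MAIN_TALK
        if self.phase is Phase.SMALL_TALK:
            return f"{feedback} How has your day been so far?"
        return f"{feedback} Could you tell me more about what brings you here today?"

policy = ProtocolPolicy()
for utterance in ["Hello", "I am fine, thanks", "I wanted to ask about the lab"]:
    print(policy.next_action(utterance))
```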


Computer Graphics and Imaging | 2013

Using Wizard of Oz to Collect Interaction Data for Voice Controlled Home Care and Communication Services

Stephan Schlögl; Gérard Chollet; Pierrick Milhorat; Jirasri Deslis; Jacques Feldmar; Jérôme Boudy; Markus Garschall; Manfred Tscheligi

This research aims at providing Voice controlled Assistive (vAssist) Care and Communication Services for the Home to seniors suffering from fine-motor problems and/or chronic diseases. The constantly growing life expectancy of the European population increasingly calls for technological products that help seniors manage their activities of daily living. In particular, we require solutions that offer interaction paradigms fitting the cognitive abilities of elderly users. Natural language-based access can be seen as one way of increasing the usability of these services. Yet, the construction of robust language technologies such as Automatic Speech Recognition and Natural Language Understanding requires sufficient domain-specific interaction data. In this paper we describe how we plan to obtain the relevant corpus data for a set of different application scenarios using the Wizard of Oz (WOZ) prototyping method. Using a publicly available WOZ tool, we discuss how the integration of existing language technologies with a human wizard may help in designing a natural user interface for seniors, and how it has the potential to underpin an iterative user-centred development process for language-based applications.


IWSDS | 2017

A Multi-lingual Evaluation of the vAssist Spoken Dialog System: Comparing Disco and RavenClaw

Javier Mikel Olaso; Pierrick Milhorat; Julia Himmelsbach; Jérôme Boudy; Gérard Chollet; Stephan Schlögl; María Inés Torres

vAssist (Voice Controlled Assistive Care and Communication Services for the Home) is a European project for which several research institutes and companies have been working on the development of adapted spoken interfaces to support home care and communication services. This paper describes the spoken dialog system that has been built. Its natural language understanding module includes a novel reference resolver and it introduces a new hierarchical paradigm to model dialog tasks. The user-centered approach applied to the whole development process led to the setup of several experiment sessions with real users. Multilingual experiments carried out in Austria, France and Spain are described along with their analyses and results in terms of both system performance and user experience. An additional experimental comparison of the RavenClaw and Disco-LFF dialog managers built into the vAssist spoken dialog system highlighted similar performance and user acceptance.
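
The hierarchical paradigm for modeling dialog tasks is only named here, not specified, so the snippet below merely illustrates the general idea of decomposing a dialog task into subtasks that a manager traverses until every required slot is filled; the class design and the task tree are invented for the example.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Task:
    """A dialog task that is either atomic (asks for one slot) or a parent of subtasks."""
    name: str
    slot: Optional[str] = None            # slot to fill if this task is atomic
    subtasks: List["Task"] = field(default_factory=list)

    def next_unfilled(self, filled: dict) -> Optional["Task"]:
        """Depth-first search for the next atomic task whose slot is still missing."""
        if self.slot is not None:
            return self if self.slot not in filled else None
        for sub in self.subtasks:
            found = sub.next_unfilled(filled)
            if found:
                return found
        return None

# Hypothetical home-care task tree, e.g. scheduling a medication reminder.
root = Task("schedule_reminder", subtasks=[
    Task("what", slot="medication"),
    Task("when", subtasks=[Task("day", slot="day"), Task("time", slot="time")]),
])

state = {"medication": "aspirin"}
pending = root.next_unfilled(state)
print(pending.name if pending else "task complete")  # -> day
```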


Engineering Interactive Computing Systems | 2013

What if everyone could do it?: a framework for easier spoken dialog system design

Pierrick Milhorat; Stephan Schlögl; Gérard Chollet; Jérôme Boudy

While Graphical User Interfaces (GUI) still represent the most common way of operating modern computing technology, Spoken Dialog Systems (SDS) have the potential to offer a more natural and intuitive mode of interaction. Even though some may say that existing speech recognition is neither reliable nor practical, the success of recent product releases such as Apple's Siri or Nuance's Dragon Drive suggests that language-based interaction is increasingly gaining acceptance. Yet, unlike applications for building GUIs, tools and frameworks that support the design, construction and maintenance of dialog systems are rare. A particular challenge of SDS design is the often complex integration of technologies. Systems usually consist of several components (e.g. speech recognition, language understanding, output generation, etc.), all of which require expertise to deploy them in a given application domain. This paper presents work in progress that aims at supporting this integration process. We propose a framework of components and describe how it may be used to prototype and gradually implement a spoken dialog system without requiring extensive domain expertise.
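
Since the framework argued for here is about connecting independently developed components (speech recognition, language understanding, output generation, and so on), one minimal way to express such an integration is a chain of components that pass a shared message along. The stubbed pipeline below is only a sketch of that integration idea under assumed component interfaces; it is not the framework proposed in the paper.

```python
from typing import Callable, Dict, List

# Each component reads from and writes to one shared message dictionary.
Component = Callable[[Dict[str, str]], Dict[str, str]]

def asr_stub(msg: Dict[str, str]) -> Dict[str, str]:
    # A real system would run a speech recognizer; here we assume a transcript exists.
    msg["text"] = msg.get("audio_transcript", "")
    return msg

def nlu_stub(msg: Dict[str, str]) -> Dict[str, str]:
    msg["intent"] = "greeting" if "hello" in msg["text"].lower() else "unknown"
    return msg

def dm_stub(msg: Dict[str, str]) -> Dict[str, str]:
    msg["response"] = (
        "Hello! How can I help?" if msg["intent"] == "greeting"
        else "Sorry, I did not understand."
    )
    return msg

def run_pipeline(components: List[Component], msg: Dict[str, str]) -> Dict[str, str]:
    """Run the shared message through each component in order."""
    for component in components:
        msg = component(msg)
    return msg

result = run_pipeline([asr_stub, nlu_stub, dm_stub], {"audio_transcript": "Hello there"})
print(result["response"])  # -> Hello! How can I help?
```

Swapping a stub for a real recognizer or dialog manager only requires keeping the same message-in, message-out signature, which is the kind of decoupling the abstract argues makes gradual implementation possible.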


IWSDS | 2019

A Conversational Dialogue Manager for the Humanoid Robot ERICA

Pierrick Milhorat; Divesh Lala; Koji Inoue; Tianyu Zhao; Masanari Ishida; Katsuya Takanashi; Shizuka Nakamura; Tatsuya Kawahara

We present a dialogue system for a conversational robot, Erica. Our goal is for Erica to engage in more human-like conversation, rather than being a simple question-answering robot. Our dialogue manager integrates question-answering with a statement response component which generates dialogue by asking about focused words detected in the user’s utterance, and a proactive initiator which generates dialogue based on events detected by Erica. We evaluate the statement response component and find that it produces coherent responses to a majority of user utterances taken from a human-machine dialogue corpus. An initial study with real users also shows that it reduces the number of fallback utterances by half. Our system is beneficial for producing mixed-initiative conversation.
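
The dialogue manager described here combines three generators: question-answering, a statement response component keyed on focus words, and a proactive initiator, with a fallback when none applies. The dispatcher below is a heavily simplified, hypothetical sketch of that combination; the selection order, helper functions, and canned strings are assumptions, not Erica's actual logic.

```python
from typing import Optional

def answer_question(utterance: str) -> Optional[str]:
    """Stub question-answering module: treats anything ending in '?' as a question."""
    if utterance.strip().endswith("?"):
        return "That is a good question. Let me think about it."
    return None

def detect_focus_word(utterance: str) -> Optional[str]:
    """Stub focus-word detector: picks the longest content-like word for illustration."""
    words = [w.strip(".,!?") for w in utterance.split()]
    content = [w for w in words if len(w) > 4]
    return max(content, key=len) if content else None

def respond(utterance: str, proactive_event: Optional[str] = None) -> str:
    # 1. Proactive initiator: events detected by the robot trigger new topics.
    if proactive_event:
        return f"By the way, I noticed {proactive_event}. Shall we talk about it?"
    # 2. Question answering for user questions.
    qa = answer_question(utterance)
    if qa:
        return qa
    # 3. Statement response: ask about a focus word in the user's statement.
    focus = detect_focus_word(utterance)
    if focus:
        return f"Tell me more about {focus}."
    # 4. Fallback utterance.
    return "I see. Please go on."

print(respond("I went hiking in the mountains last weekend"))  # -> Tell me more about mountains.
```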


Conference of the European Chapter of the Association for Computational Linguistics | 2014

Designing Language Technology Applications: A Wizard of Oz Driven Prototyping Framework

Stephan Schlögl; Pierrick Milhorat; Gérard Chollet; Jérôme Boudy

Wizard of Oz (WOZ) prototyping employs a human wizard to simulate anticipated functions of a future system. In Natural Language Processing this method is usually used to obtain early feedback on dialogue designs, to collect language corpora, or to explore interaction strategies. Yet, existing tools often require complex client-server configurations and setup routines, or suffer from compatibility problems with different platforms. Integrated solutions, which may also be used by designers and researchers without technical background, are missing. In this paper we present a framework for multi-lingual dialog research, which combines speech recognition and synthesis with WOZ. All components are open source and adaptable to different application scenarios.
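
The framework combines speech recognition and synthesis with a human wizard who stands in for the missing dialogue logic. A minimal wizard-in-the-loop relay under that idea could look like the sketch below; the console interaction and the stubbed ASR/TTS calls are assumptions for illustration, not the tool presented in the paper.

```python
def recognize_speech() -> str:
    """Stand-in for an ASR component; here the 'recognized' text is typed in."""
    return input("User (simulated ASR transcript): ")

def synthesize_speech(text: str) -> None:
    """Stand-in for a TTS component; here the response is just printed."""
    print(f"[TTS] {text}")

def wizard_turn(transcript: str) -> str:
    """The human wizard sees the transcript and selects or types the system response."""
    print(f"[Wizard view] user said: {transcript}")
    return input("Wizard response: ")

def woz_session(turns: int = 3) -> list:
    """Run a short Wizard of Oz session and log the collected interaction data."""
    log = []
    for _ in range(turns):
        user_text = recognize_speech()
        system_text = wizard_turn(user_text)
        synthesize_speech(system_text)
        log.append({"user": user_text, "system": system_text})
    return log

if __name__ == "__main__":
    corpus = woz_session()
    print(f"Collected {len(corpus)} annotated exchanges.")
```

The logged exchanges are exactly the kind of language corpus the abstract says WOZ studies are used to collect before the automated components exist.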


Biomedical Engineering Systems and Technologies | 2014

vAssist: Building the Personal Assistant for Dependent People

Hugues Sansen; J.-L. Baldinger; J. Boudy; Gérard Chollet; Pierrick Milhorat; Stephan Schlögl

Modern ICT solutions are capable of assisting dependent people at home and are therefore able to replace the physical presence of a caregiver. However, the success of such solutions depends on intuitive access to services. By proposing a speech-operated system and devices that facilitate this voice-based interaction, vAssist aims at a solution that corresponds to a virtual butler. The goal is to build a system with whom elderly users can interact naturally and even build up a social connection. Integrating modern language technology with a human-operated call center should allow for coping with the imperfections of current solutions and consequently offer the necessary reliability and user experience. vAssist is planned to be launched for German, Italian and French.


International Conference on Pattern Recognition Applications and Methods | 2016

Experiments on Adaptation Methods to Improve Acoustic Modeling for French Speech Recognition

Saeideh Mirzaei; Pierrick Milhorat; Jérôme Boudy; Gérard Chollet; Mikko Kurimo

To improve the performance of Automatic Speech Recognition (ASR) systems, the models must be retrained in order to better adjust to the speaker's voice characteristics, the environmental and channel conditions, or the context of the task. In this project we focus on the mismatch between the acoustic features used to train the model and the vocal characteristics of the front-end user of the system. To overcome this mismatch, speaker adaptation techniques have been used. A significant performance improvement has been shown using constrained Maximum Likelihood Linear Regression (cMLLR) model adaptation methods, while fast adaptation is guaranteed by using linear Vocal Tract Length Normalization (lVTLN). We have achieved a relative gain of approximately 9.44% in word error rate with unsupervised cMLLR adaptation. We also compare our ASR system with the Google ASR and show that, using adaptation methods, we exceed its performance.
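
For context on the methods named above: constrained MLLR (also called feature-space MLLR) estimates a single affine transform of the acoustic features that maximizes the likelihood of the adaptation data under the existing acoustic model, while lVTLN approximates vocal tract length warping with a similar linear transform selected per speaker. The equations below follow the standard formulation from the speaker adaptation literature, not equations quoted from this paper; the last line only spells out how a relative word error rate gain is computed.

```latex
% Feature-space (constrained) MLLR applies one affine transform to every feature vector:
\[
  \hat{\mathbf{o}}_t = \mathbf{A}\,\mathbf{o}_t + \mathbf{b}
\]
% A and b are chosen to maximize the likelihood of the adaptation data,
% with \gamma_m(t) the posterior of Gaussian component m at time t:
\[
  (\hat{\mathbf{A}}, \hat{\mathbf{b}}) \;=\; \arg\max_{\mathbf{A},\,\mathbf{b}}
  \sum_{t} \sum_{m} \gamma_m(t)\,
  \Big[ \log \lvert \mathbf{A} \rvert
        \;-\; \tfrac{1}{2}\,
        (\mathbf{A}\mathbf{o}_t + \mathbf{b} - \boldsymbol{\mu}_m)^{\top}
        \boldsymbol{\Sigma}_m^{-1}
        (\mathbf{A}\mathbf{o}_t + \mathbf{b} - \boldsymbol{\mu}_m) \Big]
\]
% The reported 9.44% figure is a relative word error rate reduction:
\[
  \text{relative gain} \;=\;
  \frac{\mathrm{WER}_{\text{baseline}} - \mathrm{WER}_{\text{adapted}}}
       {\mathrm{WER}_{\text{baseline}}}
\]
```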


International Conference on Multimodal Interfaces | 2016

Multimodal interaction with the autonomous Android ERICA

Divesh Lala; Pierrick Milhorat; Koji Inoue; Tianyu Zhao; Tatsuya Kawahara

We demonstrate an interactive conversation with an android named ERICA. In this demonstration the user can converse with ERICA on a number of topics. We demonstrate both the dialog management system and the eye gaze behavior of ERICA used for indicating attention and turn taking.

Collaboration


Dive into Pierrick Milhorat's collaborations.

Top Co-Authors

Jérôme Boudy
Institut Mines-Télécom

Stephan Schlögl
MCI Management Center Innsbruck

Dan Istrate
École Normale Supérieure