Network


Latest external collaborations at the country level. Click on the dots to dive into the details.

Hotspot


Dive into the research topics where Lucie Daubigney is active.

Publication


Featured research published by Lucie Daubigney.


International Conference on Acoustics, Speech, and Signal Processing | 2012

Off-policy learning in large-scale POMDP-based dialogue systems

Lucie Daubigney; Matthieu Geist; Olivier Pietquin

Reinforcement learning (RL) is now part of the state of the art in spoken dialogue system (SDS) optimisation. The best-performing RL methods, such as those based on Gaussian Processes, require testing small changes to the policy to assess whether they are improvements or degradations. This process is called on-policy learning. However, it can result in system behaviours that are unacceptable to users. Learning algorithms should ideally infer an optimal strategy by observing interactions generated by a non-optimal but acceptable strategy, that is, by learning off-policy. Such methods usually fail to scale up and are thus not suited to real-world systems. In this contribution, a sample-efficient, online and off-policy RL algorithm is proposed to learn an optimal policy. The algorithm is combined with a compact non-linear value function representation (namely a multi-layer perceptron) that enables it to handle large-scale systems.
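As a rough illustration of the idea described in the abstract (not the paper's actual algorithm or experimental setup), the sketch below runs semi-gradient Q-learning — an off-policy method, since the bootstrap target takes a max over next actions regardless of the behaviour policy — with a small multi-layer perceptron value function. The 5-state chain standing in for a dialogue task, the network sizes, and the learning rate are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical toy MDP standing in for a dialogue task: a 5-state chain.
# Action 1 moves right, towards a terminal goal state with reward 1;
# action 0 moves left. The behaviour policy is uniformly random, so
# learning the greedy policy from its trajectories is off-policy.
n_states, n_actions, gamma = 5, 2, 0.9

def step(s, a):
    s2 = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
    reward = 1.0 if s2 == n_states - 1 else 0.0
    return s2, reward, s2 == n_states - 1

# A small multi-layer perceptron Q(s, .) with one tanh hidden layer.
n_hidden, alpha = 16, 0.05
W1 = rng.normal(0.0, 0.1, (n_hidden, n_states)); b1 = np.zeros(n_hidden)
W2 = rng.normal(0.0, 0.1, (n_actions, n_hidden)); b2 = np.zeros(n_actions)

def q_values(s):
    x = np.zeros(n_states); x[s] = 1.0      # one-hot state encoding
    h = np.tanh(W1 @ x + b1)
    return W2 @ h + b2, x, h

# Semi-gradient Q-learning from trajectories of the random behaviour policy.
for episode in range(3000):
    s, done = 0, False
    while not done:
        a = int(rng.integers(n_actions))    # behaviour: uniform random
        s2, r, done = step(s, a)
        q, x, h = q_values(s)
        q2, _, _ = q_values(s2)
        target = r if done else r + gamma * np.max(q2)  # off-policy max
        delta = target - q[a]
        # Backpropagate the TD error through Q(s, a) only.
        dh = delta * W2[a] * (1.0 - h ** 2)
        W2[a] += alpha * delta * h
        b2[a] += alpha * delta
        W1 += alpha * np.outer(dh, x)
        b1 += alpha * dh
        s = s2

# Greedy actions in the non-terminal states after learning.
greedy = [int(np.argmax(q_values(s)[0])) for s in range(n_states - 1)]
```

Although the behaviour policy never improves, the greedy policy read off the learned Q-function moves towards the goal — the off-policy property the abstract motivates.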


International Conference on Acoustics, Speech, and Signal Processing | 2013

Random projections: A remedy for overfitting issues in time series prediction with echo state networks

Lucie Daubigney; Matthieu Geist; Olivier Pietquin

Modelling time series is a difficult task. In recent years, reservoir computing approaches have proven very efficient for such problems. Indeed, thanks to the recurrent connections between neurons, this approach is a powerful tool for capturing and modelling time dependencies between samples. Yet the prediction quality often depends on a trade-off between the number of neurons in the reservoir and the amount of training data. Supposedly, the larger the number of neurons, the richer the reservoir of dynamics, but the greater the risk of overfitting. Conversely, the smaller the number of neurons, the lower the risk of overfitting, but the poorer the reservoir of dynamics. We consider here the combination of an echo state network with a projection method, to benefit from the advantages of the reservoir computing approach without having to worry about overfitting problems caused by a lack of training data.
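As a sketch of the combination the abstract describes (not the authors' exact configuration), the code below drives a fixed random reservoir over a toy series, randomly projects the high-dimensional reservoir states to a low dimension, and trains only a small linear readout by least squares. The sine-wave task, reservoir size, spectral radius, projected dimension, and washout length are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative toy task: one-step-ahead prediction of a sine wave.
T = 300
u = np.sin(0.1 * np.arange(T))      # input series
y = u[1:]                           # targets: the next sample

# Fixed random reservoir (echo state network); only the readout is trained.
n_res = 200
W_in = rng.uniform(-0.5, 0.5, n_res)
W = rng.uniform(-0.5, 0.5, (n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # spectral radius < 1

x = np.zeros(n_res)
states = []
for t in range(T - 1):
    x = np.tanh(W_in * u[t] + W @ x)  # recurrent reservoir state update
    states.append(x.copy())
states = np.asarray(states)           # shape (T - 1, n_res)

# Random projection of the states to k << n_res dimensions, so the
# readout has few weights to fit even when training data is scarce.
k = 20
P = rng.normal(0.0, 1.0 / np.sqrt(k), (n_res, k))
Z = states @ P

# Linear readout trained by least squares on the projected states,
# after discarding an initial transient (washout).
washout = 50
w_out, *_ = np.linalg.lstsq(Z[washout:], y[washout:], rcond=None)

pred = Z[washout:] @ w_out
mse = float(np.mean((pred - y[washout:]) ** 2))
```

The reservoir stays large (rich dynamics) while the number of trained parameters is only `k`, which is the overfitting remedy the title refers to.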


Conference of the International Speech Communication Association | 2011

Uncertainty management for on-line optimisation of a POMDP-based large-scale spoken dialogue system

Lucie Daubigney; Milica Gasic; Senthilkumar Chandramohan; Matthieu Geist; Olivier Pietquin; Steve J. Young


European Symposium on Artificial Neural Networks | 2011

Single-trial P300 detection with Kalman filtering and SVMs

Lucie Daubigney; Olivier Pietquin


Annual Meeting of the Special Interest Group on Discourse and Dialogue | 2013

Model-free POMDP optimisation of tutoring systems with echo-state networks

Lucie Daubigney; Matthieu Geist; Olivier Pietquin


Conference of the International Speech Communication Association | 2013

Particle Swarm Optimisation of Spoken Dialogue System Strategies

Lucie Daubigney; Matthieu Geist; Olivier Pietquin


Symposium on Languages, Applications and Technologies | 2011

Optimization of a tutoring system from a fixed set of data.

Olivier Pietquin; Lucie Daubigney; Matthieu Geist


Journées Francophones de Planification, Décision et Apprentissage (JFPDA) | 2013

Particle swarm optimisation of dialogue strategies

Lucie Daubigney; Matthieu Geist; Olivier Pietquin


JFPDA 2011 | 2011

Uncertainty management for the online optimisation of a large-scale POMDP-based spoken dialogue manager

Lucie Daubigney; Senthilkumar Chandramohan; Matthieu Geist; Olivier Pietquin


EIAH 2011 | 2011

Reinforcement learning for personalising language-teaching software

Lucie Daubigney; Matthieu Geist; Olivier Pietquin

Collaboration


Dive into Lucie Daubigney's collaborations.

Top Co-Authors

Olivier Pietquin

Institut Universitaire de France

Milica Gasic

University of Cambridge
