Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Laurent Charlin is active.

Publication


Featured research published by Laurent Charlin.


Empirical Methods in Natural Language Processing | 2016

How NOT To Evaluate Your Dialogue System: An Empirical Study of Unsupervised Evaluation Metrics for Dialogue Response Generation

Chia-Wei Liu; Ryan Lowe; Iulian Vlad Serban; Michael Noseworthy; Laurent Charlin; Joelle Pineau

We investigate evaluation metrics for dialogue response generation systems where supervised labels, such as task completion, are not available. Recent works in response generation have adopted metrics from machine translation to compare a model's generated response to a single target response. We show that these metrics correlate very weakly with human judgements in the non-technical Twitter domain, and not at all in the technical Ubuntu domain. We provide quantitative and qualitative results highlighting specific weaknesses in existing metrics, and provide recommendations for future development of better automatic evaluation metrics for dialogue systems.
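
The gap the paper documents can be reproduced in miniature: score generated replies against a single reference with a word-overlap metric and check how the scores track human judgements. The sketch below is a hedged illustration, not the paper's pipeline; the overlap function is a crude stand-in for BLEU-style precision, and the (reference, response, human score) triples are invented.

```python
# Toy check of metric-vs-human agreement (illustrative data, not from the paper).
from collections import Counter
from scipy.stats import spearmanr

def unigram_overlap(reference: str, hypothesis: str) -> float:
    """Fraction of hypothesis tokens that also appear in the reference
    (a crude stand-in for BLEU-1 precision)."""
    ref_counts = Counter(reference.lower().split())
    hyp_counts = Counter(hypothesis.lower().split())
    total = sum(hyp_counts.values())
    if total == 0:
        return 0.0
    matches = sum(min(count, ref_counts[tok]) for tok, count in hyp_counts.items())
    return matches / total

# Hypothetical (reference, model response, human score on a 1-5 scale) triples.
examples = [
    ("see you at the meeting tomorrow", "sure , see you tomorrow", 4.5),
    ("try reinstalling the driver", "i do not know", 1.5),
    ("the train leaves at noon", "it departs at 12 pm", 4.0),  # good reply, zero overlap
]

metric_scores = [unigram_overlap(ref, hyp) for ref, hyp, _ in examples]
human_scores = [h for _, _, h in examples]
rho, _ = spearmanr(metric_scores, human_scores)
print(f"Spearman correlation between overlap metric and human scores: {rho:.2f}")
```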


Conference on Recommender Systems | 2016

Factorization Meets the Item Embedding: Regularizing Matrix Factorization with Item Co-occurrence

Dawen Liang; Jaan Altosaar; Laurent Charlin; David M. Blei

Matrix factorization (MF) models and their extensions are standard in modern recommender systems. MF models decompose the observed user-item interaction matrix into user and item latent factors. In this paper, we propose a co-factorization model, CoFactor, which jointly decomposes the user-item interaction matrix and the item-item co-occurrence matrix with shared item latent factors. For each pair of items, the co-occurrence matrix encodes the number of users that have consumed both items. CoFactor is inspired by the recent success of word embedding models (e.g., word2vec) which can be interpreted as factorizing the word co-occurrence matrix. We show that this model significantly improves the performance over MF models on several datasets with little additional computational overhead. We provide qualitative results that explain how CoFactor improves the quality of the inferred factors and characterize the circumstances where it provides the most significant improvements.
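
A minimal sketch of the coupling idea, assuming a plain squared-error objective and raw co-occurrence counts (the paper's formulation differs, e.g. it uses weighted matrix factorization and a shifted PPMI co-occurrence matrix): the user-item term and the item-item term share the same item factors, so fitting one regularizes the other.

```python
# Hedged sketch of the CoFactor coupling, not the paper's actual updates.
import numpy as np

rng = np.random.default_rng(0)
n_users, n_items, k = 6, 5, 3

Y = (rng.random((n_users, n_items)) < 0.4).astype(float)   # implicit clicks
M = Y.T @ Y                                                 # crude co-occurrence counts
np.fill_diagonal(M, 0.0)

theta = rng.normal(scale=0.1, size=(n_users, k))   # user factors
beta = rng.normal(scale=0.1, size=(n_items, k))    # item factors (shared by both terms)
gamma = rng.normal(scale=0.1, size=(n_items, k))   # item "context" factors

def joint_loss(theta, beta, gamma, lam=0.1):
    mf_term = np.sum((Y - theta @ beta.T) ** 2)               # user-item fit
    cooc_mask = M > 0                                         # only observed co-occurrences
    cooc_term = np.sum(((M - beta @ gamma.T) * cooc_mask) ** 2)
    reg = lam * (np.sum(theta ** 2) + np.sum(beta ** 2) + np.sum(gamma ** 2))
    return mf_term + cooc_term + reg

print("joint objective before any training:", joint_loss(theta, beta, gamma))
```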


International World Wide Web Conference | 2016

Modeling User Exposure in Recommendation

Dawen Liang; Laurent Charlin; James McInerney; David M. Blei

Collaborative filtering analyzes user preferences for items (e.g., books, movies, restaurants, academic papers) by exploiting the similarity patterns across users. In implicit feedback settings, all the items, including the ones that a user did not consume, are taken into consideration. But this assumption does not accord with the common sense understanding that users have a limited scope and awareness of items. For example, a user might not have heard of a certain paper, or might live too far away from a restaurant to experience it. In the language of causal analysis (Imbens & Rubin, 2015), the assignment mechanism (i.e., the items that a user is exposed to) is a latent variable that may change for various user/item combinations. In this paper, we propose a new probabilistic approach that directly incorporates user exposure to items into collaborative filtering. The exposure is modeled as a latent variable and the model infers its value from data. In doing so, we recover one of the most successful state-of-the-art approaches as a special case of our model (Hu et al. 2008), and provide a plug-in method for conditioning exposure on various forms of exposure covariates (e.g., topics in text, venue locations). We show that our scalable inference algorithm outperforms existing benchmarks in four different domains both with and without exposure covariates.
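
The key computation is easy to state in isolation: for a pair the user did not consume, the model asks whether the missing click is better explained by non-exposure or by dislike. The sketch below is a hedged, stripped-down version of that step, assuming a Bernoulli exposure prior mu_ui and a Gaussian observation model around the predicted affinity; the paper's full EM updates and covariate-conditioned priors are omitted.

```python
# Posterior probability of exposure for an unclicked (user, item) pair.
from scipy.stats import norm

def posterior_exposure_given_no_click(mu_ui, pred, noise_std=1.0):
    """P(exposed | y_ui = 0). mu_ui is the prior exposure probability,
    pred is the predicted affinity theta_u . beta_i under exposure."""
    # If exposed, a zero observation has Gaussian likelihood N(0 | pred, noise_std);
    # if never exposed, y_ui = 0 with probability 1.
    lik_exposed = norm.pdf(0.0, loc=pred, scale=noise_std)
    numer = mu_ui * lik_exposed
    denom = numer + (1.0 - mu_ui) * 1.0
    return numer / denom

# User predicted to like the item: the missing click is surprising, so the model
# concludes the user probably was never exposed (probability ~ 0.004).
print(posterior_exposure_given_no_click(mu_ui=0.5, pred=3.0))
# User predicted to be indifferent: the missing click is consistent with exposure
# (probability ~ 0.29).
print(posterior_exposure_given_no_click(mu_ui=0.5, pred=0.0))
```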


Annual Meeting of the Special Interest Group on Discourse and Dialogue | 2016

On the Evaluation of Dialogue Systems with Next Utterance Classification.

Ryan Lowe; Iulian Vlad Serban; Michael Noseworthy; Laurent Charlin; Joelle Pineau

An open challenge in constructing dialogue systems is developing methods for automatically learning dialogue strategies from large amounts of unlabelled data. Recent work has proposed Next-Utterance-Classification (NUC) as a surrogate task for building dialogue systems from text data. In this paper we investigate the performance of humans on this task to validate the relevance of NUC as a method of evaluation. Our results show three main findings: (1) humans are able to correctly classify responses at a rate much better than chance, thus confirming that the task is feasible, (2) human performance levels vary across task domains (we consider 3 datasets) and expertise levels (novices vs. experts), thus showing that a range of performance is possible on this type of task, (3) automated dialogue systems built using state-of-the-art machine learning methods have similar performance to the human novices, but worse than the experts, thus confirming the utility of this class of tasks for driving further research in automated dialogue systems.
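
NUC itself is simple to operationalize: given a dialogue context and a small candidate set containing the true next utterance plus distractors, a scorer ranks the candidates and Recall@1 counts how often the true one comes out on top. The sketch below is a hedged illustration with a toy word-overlap scorer and invented examples, not the trained models or the human study from the paper.

```python
# Toy Next-Utterance-Classification evaluation with a word-overlap scorer.
def overlap_score(context: str, candidate: str) -> int:
    return len(set(context.lower().split()) & set(candidate.lower().split()))

def recall_at_1(examples) -> float:
    """examples: list of (context, candidates, index_of_true_response)."""
    hits = 0
    for context, candidates, true_idx in examples:
        scores = [overlap_score(context, c) for c in candidates]
        best = max(range(len(candidates)), key=scores.__getitem__)
        hits += int(best == true_idx)
    return hits / len(examples)

# Hypothetical examples: each has one true next utterance and two distractors.
examples = [
    ("my ubuntu install fails on boot",
     ["run the install again and check the boot logs",
      "i love pizza",
      "the weather is nice today"], 0),
    ("what time is the talk tomorrow",
     ["the talk is at 10 am tomorrow",
      "install the nvidia driver",
      "no idea about football"], 0),
]
print("Recall@1 of the toy scorer:", recall_at_1(examples))
```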


Knowledge Discovery and Data Mining | 2014

Leveraging user libraries to bootstrap collaborative filtering

Laurent Charlin; Richard S. Zemel; Hugo Larochelle

We introduce a novel graphical model, the collaborative score topic model (CSTM), for personal recommendations of textual documents. CSTM's chief novelty lies in its learned model of individual libraries, or sets of documents, associated with each user. Overall, CSTM is a joint directed probabilistic model of user-item scores (ratings) and the textual side information in the user libraries and the items. Creating a generative description of scores and the text allows CSTM to perform well in a wide variety of data regimes, smoothly combining the side information with observed ratings as the number of ratings available for a given user ranges from none to many. Experiments on real-world datasets demonstrate CSTM's performance. We further demonstrate its utility in an application for personal recommendations of posters which we deployed at the NIPS 2013 conference.
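
The "smoothly combining" behaviour can be illustrated without the full graphical model. The sketch below is not CSTM; it is a hedged toy in which a text-similarity score (user library vs. item text) and a collaborative score are blended with a weight that grows with the number of ratings observed for the user, mimicking the none-to-many transition the abstract describes. The prior_strength parameter is an invented knob.

```python
# Toy blend of content-based and collaborative scores (not the CSTM model).
def blended_score(text_sim, mf_score, n_user_ratings, prior_strength=5.0):
    """Convex combination whose weight on the collaborative term grows
    with the number of ratings observed for this user."""
    w_cf = n_user_ratings / (n_user_ratings + prior_strength)
    return (1.0 - w_cf) * text_sim + w_cf * mf_score

text_sim = 0.8   # hypothetical cosine similarity: user library vs. item text
mf_score = 0.2   # hypothetical matrix-factorization prediction
for n in (0, 2, 20):
    print(n, "ratings ->", round(blended_score(text_sim, mf_score, n), 3))
```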


Canadian Conference on Artificial Intelligence | 2017

A Sparse Probabilistic Model of User Preference Data.

Matthew Smith; Laurent Charlin; Joelle Pineau

Modern recommender systems rely on user preference data to understand, analyze and provide items of interest to users. However, for some domains, collecting and sharing such data can be problematic: it may be expensive to gather data from several users, or it may be undesirable to share real user data for privacy reasons. We therefore propose a new model for generating realistic preference data. Our Sparse Probabilistic User Preference (SPUP) model produces synthetic data by sparsifying an initially dense user preference matrix generated by a standard matrix factorization model. The model incorporates aggregate statistics of the original data, such as user activity level and item popularity, as well as their interaction, to produce realistic data. We show empirically that our model can reproduce real-world datasets from different domains to a high degree of fidelity according to several measures. Our model can be used by both researchers and practitioners to generate new datasets or to extend existing ones, enabling the sound testing of new models and providing an improved form of bootstrapping in cases where limited data is available.
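
The generate-then-sparsify recipe is straightforward to mock up. The sketch below is a hedged approximation, not the paper's SPUP parameterization: a dense score matrix is drawn from random latent factors, and each entry is kept with a probability driven by a per-user activity level and a per-item popularity level (both assumed Beta-distributed here), so the resulting synthetic matrix shows realistic skew in row and column densities.

```python
# Generate a dense preference matrix, then sparsify it (illustrative only).
import numpy as np

rng = np.random.default_rng(1)
n_users, n_items, k = 100, 50, 8

# Dense preferences from a standard matrix-factorization generative model.
user_factors = rng.normal(size=(n_users, k))
item_factors = rng.normal(size=(n_items, k))
dense_scores = user_factors @ item_factors.T

# Per-user activity and per-item popularity (assumed Beta-distributed here).
user_activity = rng.beta(2.0, 8.0, size=(n_users, 1))    # most users rate little
item_popularity = rng.beta(2.0, 5.0, size=(1, n_items))  # a few items are popular

keep_prob = np.clip(user_activity * item_popularity * 4.0, 0.0, 1.0)
observed_mask = rng.random((n_users, n_items)) < keep_prob

sparse_ratings = np.where(observed_mask, dense_scores, np.nan)
print(f"density of the synthetic preference matrix: {observed_mask.mean():.1%}")
```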


arXiv: Computation and Language | 2016

A Hierarchical Latent Variable Encoder-Decoder Model for Generating Dialogues

Iulian Vlad Serban; Alessandro Sordoni; Ryan Lowe; Laurent Charlin; Joelle Pineau; Aaron C. Courville; Yoshua Bengio


Uncertainty in Artificial Intelligence | 2008

Hierarchical POMDP controller optimization by likelihood maximization

Marc Toussaint; Laurent Charlin; Pascal Poupart


Neural Information Processing Systems | 2014

Content-based recommendations with Poisson factorization

Prem Gopalan; Laurent Charlin; David M. Blei


Journal of Machine Learning Research | 2015

Deep Exponential Families

Rajesh Ranganath; Linpeng Tang; Laurent Charlin; David M. Blei

Collaboration


Dive into Laurent Charlin's collaborations.

Top Co-Authors

Yoshua Bengio, Université de Montréal


Chris Pal, École Polytechnique de Montréal
