Network


Nicolas Usunier's latest external collaborations at the country level.

Hotspot


Research topics in which Nicolas Usunier is active.

Publication


Featured research published by Nicolas Usunier.


International Joint Conference on Artificial Intelligence | 2011

WSABIE: scaling up to large vocabulary image annotation

Jason Weston; Samy Bengio; Nicolas Usunier

Image annotation datasets are becoming larger and larger, with tens of millions of images and tens of thousands of possible annotations. We propose a strongly performing method that scales to such datasets by simultaneously learning to optimize precision at the top of the ranked list of annotations for a given image and learning a low-dimensional joint embedding space for both images and annotations. Our method, called WSABIE, both outperforms several baseline methods and is faster and consumes less memory.
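
For illustration, here is a minimal NumPy sketch of the joint-embedding idea: images and annotations are scored by a dot product in a shared low-dimensional space, and a WARP-style sampled update weights each correction by an estimate of the positive annotation's rank. The dimensions, learning rate, and rank weighting below are illustrative assumptions, not the paper's exact settings.

    import numpy as np

    rng = np.random.default_rng(0)
    d_feat, d_embed, n_labels = 128, 64, 1000
    V = rng.normal(scale=0.01, size=(d_embed, d_feat))    # maps image features into the joint space
    W = rng.normal(scale=0.01, size=(n_labels, d_embed))  # one embedding row per annotation

    def rank_annotations(V, W, x):
        # rank every annotation for image features x by a dot product in the joint space
        return np.argsort(-(W @ (V @ x)))

    def warp_like_step(V, W, x, y_pos, lr=0.05, margin=1.0, max_trials=50):
        # sample negatives until one violates the margin, then weight the in-place update
        # by a rank estimate derived from the number of trials (a rough surrogate for
        # optimizing precision at the top of the ranked list)
        phi = V @ x
        s_pos = W[y_pos] @ phi
        for trials in range(1, max_trials + 1):
            y_neg = int(rng.integers(len(W)))
            if y_neg != y_pos and W[y_neg] @ phi > s_pos - margin:
                rank_est = (len(W) - 1) // trials
                weight = sum(1.0 / k for k in range(1, rank_est + 1))
                direction = W[y_pos] - W[y_neg]
                W[y_pos] += lr * weight * phi
                W[y_neg] -= lr * weight * phi
                V += lr * weight * np.outer(direction, x)
                return

    x, y_pos = rng.normal(size=d_feat), 3
    warp_like_step(V, W, x, y_pos)
    print(rank_annotations(V, W, x)[:5])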


European Conference on Machine Learning | 2010

Large scale image annotation: learning to rank with joint word-image embeddings

Jason Weston; Samy Bengio; Nicolas Usunier

Image annotation datasets are becoming larger and larger, with tens of millions of images and tens of thousands of possible annotations. We propose a strongly performing method that scales to such datasets by simultaneously learning to optimize precision at k of the ranked list of annotations for a given image and learning a low-dimensional joint embedding space for both images and annotations. Our method both outperforms several baseline methods and, in comparison to them, is faster and consumes less memory. We also demonstrate how our method learns an interpretable model, where annotations with alternate spellings or even languages are close in the embedding space. Hence, even when our model does not predict the exact annotation given by a human labeler, it often predicts similar annotations, a fact that we try to quantify by measuring the newly introduced “sibling” precision metric, where our method also obtains excellent results.
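
As a rough illustration of the evaluation idea, the sketch below compares plain precision at k with a relaxed variant that also credits predictions drawn from a hand-provided set of "sibling" labels (alternate spellings, translations). The sibling sets here are hypothetical inputs, not the construction used in the paper.

    def precision_at_k(predicted, true_label, k):
        # fraction of the top-k predictions that exactly match the human label
        return sum(1 for p in predicted[:k] if p == true_label) / k

    def sibling_precision_at_k(predicted, true_label, siblings, k):
        # also count predictions that belong to a provided set of sibling labels
        ok = {true_label} | set(siblings.get(true_label, ()))
        return sum(1 for p in predicted[:k] if p in ok) / k

    # Example: "ny", "nyc" and "new-york" annotate the same images in practice.
    siblings = {"new york": ["ny", "new-york", "nyc"]}
    ranked = ["nyc", "manhattan", "new york"]
    print(precision_at_k(ranked, "new york", 3))                     # 0.33...
    print(sibling_precision_at_k(ranked, "new york", siblings, 3))   # 0.66...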


European Conference on Machine Learning | 2014

Open Question Answering with Weakly Supervised Embedding Models

Antoine Bordes; Jason Weston; Nicolas Usunier

Building computers able to answer questions on any subject is a long-standing goal of artificial intelligence. Promising progress has recently been achieved by methods that learn to map questions to logical forms or database queries. Such approaches can be effective, but at the cost of either large amounts of human-labeled data or lexicons and grammars tailored by practitioners. In this paper, we instead take the radical approach of learning to map questions to vectorial feature representations. By mapping answers into the same space, one can query any knowledge base independently of its schema, without requiring any grammar or lexicon. Our method is trained with a new optimization procedure that combines stochastic gradient descent with a fine-tuning step, using the weak supervision provided by blending automatically and collaboratively generated resources. We empirically demonstrate that our model can capture meaningful signals from its noisy supervision, leading to major improvements over Paralex, the only existing method able to be trained on similar weakly labeled data.
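
A minimal sketch of the underlying scoring scheme, assuming a bag-of-words question embedding and one embedding per candidate answer; the vocabulary, candidates, and dimensions below are made up, and training is only indicated in comments.

    import numpy as np

    rng = np.random.default_rng(0)
    vocab = {w: i for i, w in enumerate("who wrote the hobbit directed titanic".split())}
    entities = {"j_r_r_tolkien": 0, "james_cameron": 1}

    d = 32
    W_words = rng.normal(scale=0.1, size=(len(vocab), d))       # word embeddings (question side)
    W_entities = rng.normal(scale=0.1, size=(len(entities), d)) # answer-side embeddings

    def embed_question(question):
        # bag-of-words representation: sum of the embeddings of known words
        idx = [vocab[w] for w in question.lower().split() if w in vocab]
        return W_words[idx].sum(axis=0)

    def score(question, entity):
        # dot product between question and candidate answer in the shared space
        return float(embed_question(question) @ W_entities[entities[entity]])

    # Training (not shown) would push scores of correct (question, answer) pairs
    # above sampled incorrect ones, e.g. with a margin ranking loss and SGD.
    best = max(entities, key=lambda e: score("who wrote the hobbit", e))
    print(best)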


European Conference on Machine Learning | 2008

Sequence labelling SVMs trained in one pass

Antoine Bordes; Nicolas Usunier; Léon Bottou

This paper proposes an online solver of the dual formulation of support vector machines for structured output spaces. We apply it to sequence labelling using both exact and greedy inference schemes. In both cases, the per-sequence training time is the same as that of a perceptron based on the same inference procedure, up to a small multiplicative constant. Comparing the two inference schemes, the greedy version is much faster; it is also amenable to higher-order Markov assumptions and performs similarly at test time. In comparison to existing algorithms, both versions match the accuracies of batch solvers that use exact inference after a single pass over the training examples.
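
For concreteness, a small sketch of the greedy inference scheme mentioned above: labels are predicted left to right, each decision conditioned on the previously predicted label through a transition score. The emission and transition matrices below are random placeholders, not learned SVM parameters.

    import numpy as np

    def greedy_decode(emissions, transitions):
        # emissions: (T, L) per-position label scores; transitions: (L, L) score of
        # label j following label i; returns one label per position
        T, L = emissions.shape
        labels, prev = [], None
        for t in range(T):
            scores = emissions[t].copy()
            if prev is not None:
                scores += transitions[prev]   # condition on the previous *predicted* label
            prev = int(np.argmax(scores))
            labels.append(prev)
        return labels

    rng = np.random.default_rng(0)
    print(greedy_decode(rng.normal(size=(5, 3)), rng.normal(size=(3, 3))))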


Empirical Methods in Natural Language Processing | 2015

Composing Relationships with Translations

Alberto García-Durán; Antoine Bordes; Nicolas Usunier

Performing link prediction in Knowledge Bases (KBs) with embedding-based models, such as TransE (Bordes et al., 2013), which represents relationships as translations in the embedding space, has shown promising results in recent years. Most of these works focus on modeling single relationships and hence do not take full advantage of the graph structure of KBs. In this paper, we propose an extension of TransE that learns to explicitly model the composition of relationships via the addition of their corresponding translation vectors. We show empirically that this improves performance for predicting single relationships as well as compositions of pairs of them.
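
A minimal sketch of the composition idea, assuming TransE-style embeddings: a relation is a translation vector, and a path r1 followed by r2 is scored by adding the two translations. All vectors below are random placeholders for illustration.

    import numpy as np

    rng = np.random.default_rng(0)
    d = 50
    h, t = rng.normal(size=d), rng.normal(size=d)     # head and tail entity embeddings
    r1, r2 = rng.normal(size=d), rng.normal(size=d)   # relation embeddings

    def transe_score(h, r, t):
        # TransE: for plausible triples, h + r should be close to t (higher score is better)
        return -np.linalg.norm(h + r - t, ord=1)

    def composed_score(h, r_path, t):
        # composition of a relation path by adding the corresponding translation vectors
        return -np.linalg.norm(h + sum(r_path) - t, ord=1)

    print(transe_score(h, r1, t), composed_score(h, [r1, r2], t))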


Journal of Artificial Intelligence Research | 2016

Combining two and three-way embedding models for link prediction in knowledge bases

Alberto García-Durán; Antoine Bordes; Nicolas Usunier; Yves Grandvalet

This paper tackles the problem of endogenous link prediction for knowledge base completion. Knowledge bases can be represented as directed graphs whose nodes correspond to entities and edges to relationships. Previous attempts consist either of powerful systems with high capacity to model complex connectivity patterns, which unfortunately usually end up overfitting on rare relationships, or of approaches that trade capacity for simplicity in order to model all relationships fairly, frequent or not. In this paper, we propose TATEC, a happy medium obtained by complementing a high-capacity model with a simpler one, both pre-trained separately and then combined. We present several variants of this model with different kinds of regularization and combination strategies, and show that this approach outperforms existing methods on different types of relationships, achieving state-of-the-art results on four benchmarks from the literature.
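
An illustrative sketch (not the released TATEC code) of combining a simple two-way term with a higher-capacity three-way term in one score; the exact form of each term, the dimensions, and the combination weight are assumptions made for this example.

    import numpy as np

    rng = np.random.default_rng(0)
    d = 25
    h, t = rng.normal(size=d), rng.normal(size=d)   # entity embeddings
    r_vec = rng.normal(size=d)                      # relation vector (low-capacity, two-way part)
    R_mat = rng.normal(size=(d, d))                 # relation matrix (high-capacity, three-way part)

    def two_way(h, r_vec, t):
        # pairwise interactions only: relation-head, relation-tail, head-tail
        return r_vec @ h + r_vec @ t + h @ t

    def three_way(h, R_mat, t):
        # full trigram interaction through a relation-specific bilinear form
        return h @ R_mat @ t

    def combined_score(h, r_vec, R_mat, t, alpha=0.5):
        # both parts would be pre-trained separately, then combined
        return alpha * two_way(h, r_vec, t) + (1 - alpha) * three_way(h, R_mat, t)

    print(combined_score(h, r_vec, R_mat, t))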


Conference on Recommender Systems | 2012

Ranking with non-random missing ratings: influence of popularity and positivity on evaluation metrics

Bruno Pradel; Nicolas Usunier; Patrick Gallinari

The evaluation of recommender systems in terms of ranking has recently gained attention, as it seems to better fit the top-k recommendation task than the usual rating prediction task. In that context, several authors have proposed to consider missing ratings as some form of negative feedback, to compensate for the skewed distribution of observed ratings when users choose the items they rate. In this work, we study two major biases of the selection of items: the first is that some items obtain more ratings than others (popularity effect), and the second is that positive ratings are observed more frequently than negative ratings (positivity effect). We present a theoretical analysis and experiments on the Yahoo! dataset with randomly selected items, which show that considering missing data as a form of negative feedback during training may improve performance, but also that it can be misleading at test time, favoring models of popularity more than models of user preferences.
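
As a small illustration of the training-side idea of treating missing ratings as negative feedback, the sketch below fills missing entries of a toy user-item matrix with a weak negative value and a low confidence weight; the particular values and weights are arbitrary choices, not the paper's protocol.

    import numpy as np

    R = np.array([[5, 0, 0, 4],
                  [0, 3, 0, 0]], dtype=float)   # 0 = missing rating

    observed = R > 0
    targets = np.where(observed, R, 1.0)        # missing entries treated as a low "negative" rating
    weights = np.where(observed, 1.0, 0.1)      # but with a much smaller confidence weight

    # A matrix-factorisation trainer would then minimise
    #   sum_{u,i} weights[u,i] * (targets[u,i] - p_u . q_i)^2
    # over user factors p_u and item factors q_i.
    print(targets)
    print(weights)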


European Conference on Machine Learning | 2014

Effective Blending of Two and Three-way Interactions for Modeling Multi-relational Data

Alberto Garcia-Duran; Antoine Bordes; Nicolas Usunier

Much work has recently been proposed to model relational data, especially in the multi-relational case, where different kinds of relationships are used to connect the various data entities. Previous attempts consist either of powerful systems with high capacity to model complex connectivity patterns, which unfortunately usually end up overfitting on rare relationships, or of approaches that trade capacity for simplicity in order to model all relationships fairly, frequent or not. In this paper, we propose a happy medium obtained by complementing a high-capacity model with a simpler one, both pre-trained separately and jointly fine-tuned. We show that our approach outperforms existing models on different types of relationships and achieves state-of-the-art results on two benchmarks from the literature.


Conference on Recommender Systems | 2016

A Coverage-Based Approach to Recommendation Diversity On Similarity Graph

Shameem Ahamed Puthiya Parambath; Nicolas Usunier; Yves Grandvalet

We consider the problem of generating diverse, personalized recommendations such that a small set of recommended items covers a broad range of the user's interests. We represent items in a similarity graph, and we formulate the relevance/diversity trade-off as finding a small set of unrated items that best covers a subset of items positively rated by the user. In contrast to previous approaches, our method does not rely on an explicit trade-off between a relevance objective and a diversity objective, as the estimates of relevance and diversity are implicit in the coverage criterion. We show on several benchmark datasets that our approach compares favorably to state-of-the-art diversification methods according to various relevance and diversity measures.
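
A minimal sketch of the coverage criterion, assuming the similarity graph is summarized by, for each candidate item, the set of the user's liked items it is similar to: recommendations are then chosen greedily to maximize coverage. The toy data below is made up.

    def greedy_cover(candidates, liked, covers, k):
        # covers[c]: set of liked items that candidate c is similar to (graph neighbours)
        chosen, covered = [], set()
        remaining = set(candidates)
        for _ in range(min(k, len(remaining))):
            # pick the candidate that covers the most not-yet-covered liked items
            best = max(remaining, key=lambda c: len(covers.get(c, set()) & (liked - covered)))
            chosen.append(best)
            covered |= covers.get(best, set()) & liked
            remaining.remove(best)
        return chosen

    liked = {"jazz1", "jazz2", "rock1"}
    covers = {"jazz3": {"jazz1", "jazz2"}, "rock2": {"rock1"}, "jazz4": {"jazz1"}}
    print(greedy_cover(covers.keys(), liked, covers, 2))   # ['jazz3', 'rock2']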


International Conference on Acoustics, Speech, and Signal Processing | 2017

How should we evaluate supervised hashing?

Alexandre Sablayrolles; Matthijs Douze; Nicolas Usunier; Hervé Jégou

Hashing produces compact representations of documents so that tasks like classification or retrieval can be performed on these short codes. When hashing is supervised, the codes are trained using labels on the training data. This paper first shows that the evaluation protocols used in the literature for supervised hashing are not satisfactory: we show that a trivial solution that encodes the output of a classifier significantly outperforms existing supervised or semi-supervised methods, while using much shorter codes. We then propose two alternative protocols for supervised hashing: one based on retrieval on a disjoint set of classes, and another based on transfer learning to new classes. We provide two baseline methods for image-related tasks to assess the performance of (semi-)supervised hashing: without coding and with unsupervised codes. These baselines give a lower and an upper bound on the performance of a supervised hashing scheme.
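
A rough sketch of the "trivial solution" described above, under illustrative assumptions (the scikit-learn digits dataset, a logistic-regression classifier, and a code that is simply the binary encoding of the predicted class): retrieval by Hamming distance over such codes reduces to matching predicted classes, which is why this baseline is hard to beat under a label-based evaluation protocol.

    import numpy as np
    from sklearn.datasets import load_digits
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    X, y = load_digits(return_X_y=True)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    # an ordinary classifier trained on the labels
    clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

    def class_code(x, n_classes=10):
        # encode the predicted class index in ceil(log2(n_classes)) bits
        c = int(clf.predict(x.reshape(1, -1))[0])
        n_bits = int(np.ceil(np.log2(n_classes)))
        return [(c >> b) & 1 for b in range(n_bits)]

    print(class_code(X_te[0]))   # a 4-bit "hash code" that just encodes the predicted class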

Collaboration


Nicolas Usunier's top co-authors.

Top Co-Authors

Alberto Garcia-Duran (Centre national de la recherche scientifique)
Yves Grandvalet (Centre national de la recherche scientifique)
Shameem Ahamed Puthiya Parambath (Centre national de la recherche scientifique)