
Publication


Featured research published by Florian Strub.


Computer Vision and Pattern Recognition | 2017

GuessWhat?! Visual Object Discovery through Multi-modal Dialogue

Harm de Vries; Florian Strub; Sarath Chandar; Olivier Pietquin; Hugo Larochelle; Aaron C. Courville

We introduce GuessWhat?!, a two-player guessing game as a testbed for research on the interplay of computer vision and dialogue systems. The goal of the game is to locate an unknown object in a rich image scene by asking a sequence of questions. Higher-level image understanding, like spatial reasoning and language grounding, is required to solve the proposed task. Our key contribution is the collection of a large-scale dataset consisting of 150K human-played games with a total of 800K visual question-answer pairs on 66K images. We explain our design decisions in collecting the dataset and introduce the oracle and questioner tasks that are associated with the two players of the game. We prototyped deep learning models to establish initial baselines for the introduced tasks.


International Joint Conference on Artificial Intelligence | 2017

End-to-end optimization of goal-driven and visually grounded dialogue systems

Florian Strub; Harm de Vries; Jérémie Mary; Aaron C. Courville; Olivier Pietquin

End-to-end design of dialogue systems has recently become a popular research topic thanks to powerful tools such as encoder-decoder architectures for sequence-to-sequence learning. Yet, most current approaches cast human-machine dialogue management as a supervised learning problem, aiming at predicting the next utterance of a participant given the full history of the dialogue. This vision is too simplistic to render the intrinsic planning problem inherent to dialogue as well as its grounded nature, making the context of a dialogue larger than the sole history. This is why only chitchat and question answering tasks have been addressed so far using end-to-end architectures. In this paper, we introduce a Deep Reinforcement Learning method to optimize visually grounded task-oriented dialogues, based on the policy gradient algorithm. This approach is tested on a dataset of 120k dialogues collected through Mechanical Turk and provides encouraging results at solving both the problem of generating natural dialogues and the task of discovering a specific object in a complex picture.
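The policy gradient idea the abstract refers to can be sketched as a REINFORCE-style update: each action's log-probability gradient is scaled by the task reward the dialogue eventually earns. This is a minimal illustrative sketch, not the paper's implementation; the function name and the baseline parameter are assumptions for exposition.

```python
import numpy as np

def reinforce_gradient(log_prob_grads, reward, baseline=0.0):
    """REINFORCE estimate: scale the gradient of each generated token's
    log-probability by the baseline-corrected scalar dialogue reward
    (e.g. 1 if the target object was found, 0 otherwise)."""
    advantage = reward - baseline
    return [advantage * g for g in log_prob_grads]

# Two dummy per-token gradients; a failed dialogue (reward 0, no baseline)
# produces a zero update, a successful one reinforces the taken actions.
grads = [np.ones(3), np.ones(3)]
zero_update = reinforce_gradient(grads, reward=0.0)
pos_update = reinforce_gradient(grads, reward=1.0)
```

A baseline (e.g. a running mean of rewards) is commonly subtracted to reduce the variance of this estimator without biasing it.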


Conference on Recommender Systems | 2016

Hybrid Recommender System based on Autoencoders

Florian Strub; Romaric Gaudel; Jérémie Mary

A standard model for Recommender Systems is the Matrix Completion setting: given a partially known matrix of ratings given by users (rows) to items (columns), infer the unknown ratings. In recent decades, few attempts were made to handle this objective with Neural Networks, but recently an architecture based on Autoencoders proved to be a promising approach. In this paper, we enhance that architecture (i) by using a loss function adapted to input data with missing values, and (ii) by incorporating side information. The experiments demonstrate that while side information only slightly improves the test error averaged over all users/items, it has more impact on cold users/items.
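A loss "adapted to input data with missing values" can be illustrated by masking the reconstruction error so that only observed ratings contribute. This is a small numpy sketch under that interpretation, not the authors' code; the function name and toy matrices are invented for the example.

```python
import numpy as np

def masked_mse(predictions, ratings, mask):
    """MSE over observed entries only: cells where mask == 0 are
    missing ratings and contribute nothing to the loss."""
    diff = (predictions - ratings) * mask
    return (diff ** 2).sum() / mask.sum()

# Toy 2-user x 3-item rating matrix; zeros in `mask` mark missing cells.
ratings = np.array([[5.0, 0.0, 3.0],
                    [0.0, 4.0, 0.0]])
mask    = np.array([[1.0, 0.0, 1.0],
                    [0.0, 1.0, 0.0]])
preds   = np.array([[4.0, 2.0, 3.0],
                    [1.0, 4.0, 5.0]])
loss = masked_mse(preds, ratings, mask)  # only the 3 observed cells count
```

Without the mask, the autoencoder would be penalized for its predictions on unobserved cells, which is exactly what matrix completion must avoid.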


European Conference on Computer Vision | 2018

Visual Reasoning with Multi-hop Feature Modulation

Florian Strub; Mathieu Seurin; Ethan Perez; Harm de Vries; Jérémie Mary; Philippe Preux; Aaron C. Courville; Olivier Pietquin

Recent breakthroughs in computer vision and natural language processing have spurred interest in challenging multi-modal tasks such as visual question-answering and visual dialogue. For such tasks, one successful approach is to condition image-based convolutional network computation on language via Feature-wise Linear Modulation (FiLM) layers, i.e., per-channel scaling and shifting. We propose to generate the parameters of FiLM layers going up the hierarchy of a convolutional network in a multi-hop fashion rather than all at once, as in prior work. By alternating between attending to the language input and generating FiLM layer parameters, this approach is better able to scale to settings with longer input sequences such as dialogue. We demonstrate that multi-hop FiLM generation significantly outperforms prior state-of-the-art on the GuessWhat?! visual dialogue task and matches state-of-the-art on the ReferIt object retrieval task, and we provide additional qualitative analysis.
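The "per-channel scaling and shifting" that defines a FiLM layer is simple to write down. Below is a minimal numpy sketch of the modulation step itself, assuming gamma and beta are predicted per example from the language input by a separate network (not shown); shapes and names are illustrative, not the authors' implementation.

```python
import numpy as np

def film(features, gamma, beta):
    """Feature-wise Linear Modulation: scale and shift each channel.

    features: (batch, channels, height, width) conv feature maps
    gamma, beta: (batch, channels) conditioning parameters, broadcast
    over the spatial dimensions.
    """
    return gamma[:, :, None, None] * features + beta[:, :, None, None]

# Sanity check: gamma=1, beta=0 is the identity modulation.
x = np.random.randn(2, 4, 8, 8)
identity_out = film(x, np.ones((2, 4)), np.zeros((2, 4)))
doubled_out = film(x, 2.0 * np.ones((2, 4)), np.zeros((2, 4)))
```

Because each channel gets its own affine transform, the language input can selectively amplify or suppress visual feature maps at every layer of the network.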


National Conference on Artificial Intelligence | 2018

FiLM: Visual Reasoning with a General Conditioning Layer

Ethan Perez; Florian Strub; Harm de Vries; Vincent Dumoulin; Aaron C. Courville


Neural Information Processing Systems | 2015

Collaborative Filtering with Stacked Denoising AutoEncoders and Sparse Inputs

Florian Strub; Jérémie Mary


Neural Information Processing Systems | 2017

Modulating early visual processing by language

Harm de Vries; Florian Strub; Jérémie Mary; Hugo Larochelle; Olivier Pietquin; Aaron C. Courville


International Conference on Machine Learning | 2017

Learning Visual Reasoning Without Strong Priors

Ethan Perez; Harm de Vries; Florian Strub; Vincent Dumoulin; Aaron C. Courville


Neural Information Processing Systems | 2017

HoME: a Household Multimodal Environment

Simon Brodeur; Ethan Perez; Ankesh Anand; Florian Golemo; Luca Celotti; Florian Strub; Jean Rouat; Hugo Larochelle; Aaron C. Courville


arXiv: Information Retrieval | 2016

Hybrid Collaborative Filtering with Autoencoders

Florian Strub; Jérémie Mary; Romaric Gaudel

Collaboration


Dive into Florian Strub's collaborations.

Top Co-Authors

Harm de Vries

Université de Montréal

Olivier Pietquin

Institut Universitaire de France

Romaric Gaudel

École normale supérieure de Cachan

Hugo Larochelle

Université de Sherbrooke