Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Sarath Chandar is active.

Publication


Featured research published by Sarath Chandar.


Computer Vision and Pattern Recognition | 2017

GuessWhat?! Visual Object Discovery through Multi-modal Dialogue

Harm de Vries; Florian Strub; Sarath Chandar; Olivier Pietquin; Hugo Larochelle; Aaron C. Courville

We introduce GuessWhat?!, a two-player guessing game as a testbed for research on the interplay of computer vision and dialogue systems. The goal of the game is to locate an unknown object in a rich image scene by asking a sequence of questions. Higher-level image understanding, like spatial reasoning and language grounding, is required to solve the proposed task. Our key contribution is the collection of a large-scale dataset consisting of 150K human-played games with a total of 800K visual question-answer pairs on 66K images. We explain our design decisions in collecting the dataset and introduce the oracle and questioner tasks that are associated with the two players of the game. We prototyped deep learning models to establish initial baselines of the introduced tasks.


Meeting of the Association for Computational Linguistics | 2016

Generating Factoid Questions With Recurrent Neural Networks: The 30M Factoid Question-Answer Corpus

Iulian Vlad Serban; Alberto García-Durán; Caglar Gulcehre; Sungjin Ahn; Sarath Chandar; Aaron C. Courville; Yoshua Bengio

Over the past decade, large-scale supervised learning corpora have enabled machine learning researchers to make substantial advances. However, to this date, there are no large-scale question-answer corpora available. In this paper we present the 30M Factoid Question-Answer Corpus, an enormous question answer pair corpus produced by applying a novel neural network architecture on the knowledge base Freebase to transduce facts into natural language questions. The produced question answer pairs are evaluated both by human evaluators and using automatic evaluation metrics, including well-established machine translation and sentence similarity metrics. Across all evaluation criteria the question-generation model outperforms the competing template-based baseline. Furthermore, when presented to human evaluators, the generated questions appear comparable in quality to real human-generated questions.


Neural Computation | 2016

Correlational neural networks

Sarath Chandar; Mitesh M. Khapra; Hugo Larochelle; Balaraman Ravindran

Common representation learning (CRL), wherein different descriptions (or views) of the data are embedded in a common subspace, has been receiving a lot of attention recently. Two popular paradigms here are canonical correlation analysis (CCA)–based approaches and autoencoder (AE)–based approaches. CCA-based approaches learn a joint representation by maximizing correlation of the views when projected to the common subspace. AE-based methods learn a common representation by minimizing the error of reconstructing the two views. Each of these approaches has its own advantages and disadvantages. For example, while CCA-based approaches outperform AE-based approaches for the task of transfer learning, they are not as scalable as the latter. In this work, we propose an AE-based approach, correlational neural network (CorrNet), that explicitly maximizes correlation among the views when projected to the common subspace. Through a series of experiments, we demonstrate that the proposed CorrNet is better than AE and CCA with respect to its ability to learn correlated common representations. We employ CorrNet for several cross-language tasks and show that the representations learned using it perform better than the ones learned using other state-of-the-art approaches.
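The CorrNet objective described above combines autoencoder-style reconstruction of both views with an explicit correlation term between the projected views. The following is a minimal numerical sketch of that objective, assuming hypothetical linear encoders/decoders with random (untrained) weights and synthetic two-view data; it illustrates the loss computation, not the paper's trained model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-view data: 100 samples whose views share latent factors.
n, d, k = 100, 5, 3
z = rng.normal(size=(n, k))           # shared latent factors
X = z @ rng.normal(size=(k, d))       # view 1
Y = z @ rng.normal(size=(k, d))       # view 2

# Hypothetical linear encoders (Wx, Wy) and decoders (Ux, Uy).
Wx, Wy = rng.normal(size=(d, k)), rng.normal(size=(d, k))
Ux, Uy = rng.normal(size=(k, d)), rng.normal(size=(k, d))

def corrnet_loss(X, Y, lam=1.0):
    Hx, Hy = X @ Wx, Y @ Wy           # project each view into the common subspace
    H = Hx + Hy                       # joint encoding of both views
    # Reconstruction terms: rebuild BOTH views from the joint encoding
    # and from each single-view encoding (so either view alone suffices).
    rec = 0.0
    for Henc in (H, Hx, Hy):
        rec += np.mean((Henc @ Ux - X) ** 2) + np.mean((Henc @ Uy - Y) ** 2)
    # Correlation term: per-dimension correlation between the two projections.
    Hxc, Hyc = Hx - Hx.mean(0), Hy - Hy.mean(0)
    corr = np.sum((Hxc * Hyc).sum(0)
                  / np.sqrt((Hxc ** 2).sum(0) * (Hyc ** 2).sum(0)))
    # Minimize reconstruction error while maximizing correlation.
    return rec - lam * corr

print(corrnet_loss(X, Y))
```

Training would minimize this loss over the encoder/decoder weights; the `lam` hyperparameter trades off reconstruction fidelity against cross-view correlation.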


North American Chapter of the Association for Computational Linguistics | 2016

Bridge Correlational Neural Networks for Multilingual Multimodal Representation Learning

Janarthanan Rajendran; Mitesh M. Khapra; Sarath Chandar; Balaraman Ravindran

Recently there has been a lot of interest in learning common representations for multiple views of data. Typically, such common representations are learned using a parallel corpus between the two views (say, 1M images and their English captions). In this work, we address a real-world scenario where no direct parallel data is available between two views of interest (say, V_1 and V_2) but parallel data is available between each of these views and a pivot view (V_3). We propose a model for learning a common representation for V_1, V_2, and V_3.
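The pivot-based setup can be illustrated numerically: two views that were never paired directly become comparable once each is mapped toward a shared pivot view. This is a toy sketch with hypothetical linear maps fitted by least squares (a stand-in for the paper's neural encoders) on synthetic, noiseless data.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy setup: no direct (V1, V2) pairs; each view is paired only with pivot V3.
n, d, k = 200, 4, 2
z = rng.normal(size=(n, k))           # shared latent factors
V1 = z @ rng.normal(size=(k, d))
V2 = z @ rng.normal(size=(k, d))
V3 = z @ rng.normal(size=(k, d))      # pivot view

# Fit linear maps V1 -> pivot space and V2 -> pivot space by least squares,
# using only the (V1, V3) and (V2, V3) parallel data.
W13, *_ = np.linalg.lstsq(V1, V3, rcond=None)
W23, *_ = np.linalg.lstsq(V2, V3, rcond=None)

# V1 and V2 were never paired directly, yet their projections through the
# pivot space are now aligned and directly comparable.
H1, H2 = V1 @ W13, V2 @ W23
c = np.corrcoef(H1[:, 0], H2[:, 0])[0, 1]
print(c)  # close to 1.0 in this noiseless linear toy
```

With real data and nonlinear views, the correlation would of course be imperfect; the point is that the pivot supplies the alignment that the missing (V1, V2) parallel corpus would otherwise provide.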


Neural Computation | 2017

Dynamic Neural Turing Machine with Continuous and Discrete Addressing Schemes

Caglar Gulcehre; Sarath Chandar; Kyunghyun Cho; Yoshua Bengio



arXiv: Computation and Language | 2017

A Deep Reinforcement Learning Chatbot.

Iulian Vlad Serban; Chinnadhurai Sankar; Mathieu Germain; Saizheng Zhang; Zhouhan Lin; Sandeep Subramanian; Taesup Kim; Michael Pieper; Sarath Chandar; Nan Rosemary Ke; Sai Rajeswar Mudumba; Alexandre de Brébisson; Jose Sotelo; Dendi Suhubdy; Vincent Michalski; Alexandre Nguyen; Joelle Pineau; Yoshua Bengio



arXiv: Machine Learning | 2017

Hierarchical Memory Networks

Sarath Chandar; Sungjin Ahn; Hugo Larochelle; Pascal Vincent; Gerald Tesauro; Yoshua Bengio



arXiv: Learning | 2016

Dynamic Neural Turing Machine with Soft and Hard Addressing Schemes.

Caglar Gulcehre; Sarath Chandar; Kyunghyun Cho; Yoshua Bengio



arXiv: Learning | 2015

Clustering is Efficient for Approximate Maximum Inner Product Search

Alex Auvolat; Sarath Chandar; Pascal Vincent; Hugo Larochelle; Yoshua Bengio



International Conference on Computational Linguistics | 2016

A Correlational Encoder Decoder Architecture for Pivot Based Sequence Generation.

Amrita Saha; Mitesh M. Khapra; Sarath Chandar; Janarthanan Rajendran; Kyunghyun Cho


Collaboration


Dive into Sarath Chandar's collaborations.

Top Co-Authors


Yoshua Bengio

Université de Montréal


Hugo Larochelle

Université de Sherbrooke


Balaraman Ravindran

Indian Institute of Technology Madras
