Publication


Featured research published by Ákos Kádár.


Computational Linguistics | 2017

Representation of linguistic form and function in recurrent neural networks

Ákos Kádár; Grzegorz Chrupała; Afra Alishahi

We present novel methods for analyzing the activation patterns of recurrent neural networks from a linguistic point of view and explore the types of linguistic structure they learn. As a case study, we use a standard standalone language model, and a multi-task gated recurrent network architecture consisting of two parallel pathways with shared word embeddings: The Visual pathway is trained on predicting the representations of the visual scene corresponding to an input sentence, and the Textual pathway is trained to predict the next word in the same sentence. We propose a method for estimating the amount of contribution of individual tokens in the input to the final prediction of the networks. Using this method, we show that the Visual pathway pays selective attention to lexical categories and grammatical functions that carry semantic information, and learns to treat word types differently depending on their grammatical function and their position in the sequential structure of the sentence. In contrast, the language models are comparatively more sensitive to words with a syntactic function. Further analysis of the most informative n-gram contexts for each model shows that in comparison with the Visual pathway, the language models react more strongly to abstract contexts that represent syntactic constructions.
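The token-contribution method described above can be illustrated with an omission-style score: encode the full sentence, re-encode it with one token left out, and measure how far the representation moves. The sketch below is illustrative only, not the authors' code; the toy additive encoder stands in for the trained recurrent pathways, and all names and sizes are made up for the example.

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def omission_scores(tokens, encode):
    """Score each token by how much removing it shifts the sentence encoding.

    Returns one non-negative score per token: 1 - cosine(full, without-token).
    """
    full = encode(tokens)
    scores = []
    for i in range(len(tokens)):
        reduced = encode(tokens[:i] + tokens[i + 1:])
        scores.append(1.0 - cosine(full, reduced))
    return scores

# Toy stand-in encoder: the sum of fixed random word vectors.
rng = np.random.default_rng(0)
vocab = {}

def toy_encode(tokens):
    vecs = [vocab.setdefault(t, rng.normal(size=16)) for t in tokens]
    return np.sum(vecs, axis=0)

print(omission_scores(["a", "dog", "runs"], toy_encode))
```

With a trained model in place of `toy_encode`, tokens whose removal barely moves the representation (e.g. determiners) get low scores, while semantically loaded words get high ones.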


International Joint Conference on Natural Language Processing | 2015

Learning language through pictures

Grzegorz Chrupała; Ákos Kádár; Afra Alishahi

We propose Imaginet, a model of learning visually grounded representations of language from coupled textual and visual input. The model consists of two Gated Recurrent Unit networks with shared word embeddings, and uses a multi-task objective by receiving a textual description of a scene and trying to concurrently predict its visual representation and the next word in the sentence. Like humans, it acquires meaning representations for individual words from descriptions of visual scenes. Moreover, it learns to effectively use sequential structure in semantic interpretation of multi-word phrases.
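The architecture described in the abstract (shared word embeddings feeding two recurrent pathways with a visual and a textual objective) can be sketched as follows. This is a minimal structural illustration under stated assumptions, not the Imaginet implementation: a vanilla RNN stands in for the paper's GRUs, the dimensions are toy values, and all variable names are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
V, D, H, IMG = 20, 8, 12, 10  # vocab, embedding, hidden, image-vector sizes

# Shared word embeddings feed both pathways.
E = rng.normal(0.0, 0.1, (V, D))
# One recurrent pathway per task (vanilla RNN standing in for a GRU).
Wt, Ut = rng.normal(0.0, 0.1, (D, H)), rng.normal(0.0, 0.1, (H, H))
Wv, Uv = rng.normal(0.0, 0.1, (D, H)), rng.normal(0.0, 0.1, (H, H))
Wout = rng.normal(0.0, 0.1, (H, V))    # textual head: next-word scores
Wimg = rng.normal(0.0, 0.1, (H, IMG))  # visual head: image-vector prediction

def run_pathway(ids, W, U):
    """Run one recurrent pathway over a list of token ids; return all states."""
    h = np.zeros(H)
    states = []
    for i in ids:
        h = np.tanh(E[i] @ W + h @ U)
        states.append(h)
    return np.stack(states)

def imaginet_forward(ids):
    ht = run_pathway(ids, Wt, Ut)    # textual pathway
    hv = run_pathway(ids, Wv, Uv)    # visual pathway
    next_word_scores = ht @ Wout     # one next-word prediction per position
    image_vec = hv[-1] @ Wimg        # final state predicts the visual scene
    return next_word_scores, image_vec

scores, img = imaginet_forward([3, 7, 1, 4])
```

Training would sum a cross-entropy loss on `next_word_scores` and a distance loss between `image_vec` and the CNN-derived scene vector; only the forward structure is shown here.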


Empirical Methods in Natural Language Processing | 2015

Linguistic Analysis of Multi-Modal Recurrent Neural Networks

Ákos Kádár; Grzegorz Chrupała; Afra Alishahi

Recurrent neural networks (RNNs) have gained a reputation for beating state-of-the-art results on many NLP benchmarks and for learning representations of words and larger linguistic units that encode complex syntactic and semantic structures. However, it is not straightforward to understand how exactly these models make their decisions. Recently, Li et al. (2015) developed methods to provide linguistically motivated analysis for RNNs trained for sentiment analysis. Here we focus on the analysis of a multi-modal Gated Recurrent Unit (GRU) architecture trained to predict, from image descriptions, image vectors extracted by a CNN trained on ImageNet. We propose two methods to explore the importance of grammatical categories with respect to the model and the task. We observe that the model pays most attention to head-words, noun subjects, and adjectival modifiers, and least to determiners and coordinations.


International Joint Conference on Natural Language Processing | 2017

Imagination Improves Multimodal Translation

Desmond Elliott; Ákos Kádár


International Conference on Learning Representations | 2018

FigureQA: An Annotated Figure Dataset for Visual Reasoning

Samira Ebrahimi Kahou; Adam Atkinson; Vincent Michalski; Ákos Kádár; Adam Trischler; Yoshua Bengio


arXiv: Learning | 2018

TextWorld: A Learning Environment for Text-based Games

Marc-Alexandre Côté; Ákos Kádár; Xingdi Yuan; Ben Kybartas; Tavian Barnes; Emery Fine; James Moore; Matthew J. Hausknecht; Layla El Asri; Mahmoud Adada; Wendy Tay; Adam Trischler


Meeting of the Association for Computational Linguistics | 2018

NeuralREG: An end-to-end approach to referring expression generation

Thiago Castro Ferreira; Diego Moussallem; Ákos Kádár; Sander Wubben; Emiel Krahmer


International Conference on Computational Linguistics | 2018

Revisiting the Hierarchical Multiscale LSTM

Ákos Kádár; Marc-Alexandre Côté; Grzegorz Chrupała; Afra Alishahi


International Conference on Computational Linguistics | 2018

DIDEC: The Dutch Image Description and Eye-tracking Corpus

Emiel van Miltenburg; Ákos Kádár; Ruud Koolen; Emiel Krahmer


arXiv: Computation and Language | 2018

Lessons learned in multilingual grounded language learning

Ákos Kádár; Desmond Elliott; Marc-Alexandre Côté; Grzegorz Chrupała; Afra Alishahi
