Publication


Featured research published by Sandro Pezzelle.


Meeting of the Association for Computational Linguistics | 2016

The LAMBADA dataset: Word prediction requiring a broad discourse context

Denis Paperno; Germán Kruszewski; Angeliki Lazaridou; Ngoc Quan Pham; Raffaella Bernardi; Sandro Pezzelle; Marco Baroni; Gemma Boleda; Raquel Fernández

We introduce LAMBADA, a dataset to evaluate the capabilities of computational models for text understanding by means of a word prediction task. LAMBADA is a collection of narrative passages sharing the characteristic that human subjects are able to guess their last word if they are exposed to the whole passage, but not if they only see the last sentence preceding the target word. To succeed on LAMBADA, computational models cannot simply rely on local context, but must be able to keep track of information in the broader discourse. We show that LAMBADA exemplifies a wide range of linguistic phenomena, and that none of several state-of-the-art language models reaches accuracy above 1% on this novel benchmark. We thus propose LAMBADA as a challenging test set, meant to encourage the development of new models capable of genuine understanding of broad context in natural language text.
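
As a rough illustration of the evaluation protocol described in the abstract, the sketch below contrasts prediction from the full passage with prediction from only the final sentence. The predict_next_word function and the toy passage are placeholders, not part of the LAMBADA release.

```python
# Minimal sketch of the LAMBADA-style evaluation: predict the last word of a
# passage, either from the full passage or from only its final sentence.
# predict_next_word is a hypothetical stand-in for any language model.

def predict_next_word(context: str) -> str:
    """Placeholder model: naively guesses the most recent token."""
    tokens = context.split()
    return tokens[-1] if tokens else ""

def last_word_accuracy(examples, use_full_passage=True):
    """Score last-word prediction with or without the broad discourse context."""
    correct = 0
    for passage, target in examples:
        sentences = passage.split(". ")
        context = passage if use_full_passage else sentences[-1]
        correct += int(predict_next_word(context).lower() == target.lower())
    return correct / len(examples)

# Toy example in the same format: (passage without its last word, last word).
examples = [("Anna handed the keys to her brother. He thanked", "Anna")]
print(last_word_accuracy(examples, use_full_passage=True))   # uses broad context
print(last_word_accuracy(examples, use_full_passage=False))  # final sentence only
```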


Meeting of the Association for Computational Linguistics | 2016

“Look, some green circles!”: learning to quantify from images

Ionut Sorodoc; Angeliki Lazaridou; Gemma Boleda; Aurélie Herbelot; Sandro Pezzelle; Raffaella Bernardi

In this paper, we investigate whether a neural network model can learn the meaning of natural language quantifiers (no, some and all) from their use in visual contexts. We show that memory networks perform well in this task, and that explicit counting is not necessary to the system’s performance, supporting psycholinguistic evidence on the acquisition of quantifiers.
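
As a rough illustration of the task set-up (not of the memory-network model itself), the sketch below labels synthetic scenes of coloured shapes with the quantifier that truthfully describes the queried objects; the scene format and all names are illustrative assumptions.

```python
# Sketch of the quantification task: a scene is a list of (colour, shape)
# objects, and the target label is the quantifier (no/some/all) that holds
# for the queried objects, here "green circles". This is the supervision a
# learning model would receive, not the model from the paper.

import random

def quantifier_label(scene, is_target):
    """Return the quantifier describing how many objects satisfy is_target."""
    hits = sum(is_target(obj) for obj in scene)
    if hits == 0:
        return "no"
    if hits == len(scene):
        return "all"
    return "some"

def random_scene(n_objects=5):
    colours = ["green", "red", "blue"]
    shapes = ["circle", "square"]
    return [(random.choice(colours), random.choice(shapes)) for _ in range(n_objects)]

is_green_circle = lambda obj: obj == ("green", "circle")
scene = random_scene()
print(scene, "->", quantifier_label(scene, is_green_circle))
```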


Meeting of the Association for Computational Linguistics | 2016

Building a Bagpipe with a Bag and a Pipe: Exploring Conceptual Combination in Vision

Sandro Pezzelle; Ravi Shekhar; Raffaella Bernardi

This preliminary study investigates whether, and to what extent, conceptual combination is conveyed by vision. Working with noun-noun compounds, we show that, in some cases, the composed visual vector built with a simple additive model is effective in approximating the visual vector representing the complex concept.
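
The simple additive model mentioned in the abstract can be sketched as follows; the random vectors stand in for the paper's visual features, and the dimensionality and helper names are assumptions made for illustration.

```python
# Additive composition of visual vectors: approximate the vector of a compound
# ("bagpipe") by summing the vectors of its constituents ("bag" + "pipe"), then
# compare with the observed compound vector via cosine similarity.

import numpy as np

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

rng = np.random.default_rng(0)
dim = 300  # assumed feature dimensionality; real features would come from a CNN
v_bag, v_pipe, v_bagpipe = (rng.standard_normal(dim) for _ in range(3))

composed = v_bag + v_pipe  # the additive model
print("cosine(composed, observed compound):", cosine(composed, v_bagpipe))
```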


Archive | 2016

Imparare a quantificare guardando [Learning to Quantify by Looking]

Sandro Pezzelle; Ionut Sorodoc; Aurélie Herbelot; Raffaella Bernardi

In this paper, we focus on linguistic questions over images that can be answered with a quantifier (e.g. How many dogs are black? Some/most/all of them, etc.). We show that in order to learn to quantify, a multimodal model has to obtain a genuine understanding of the linguistic and visual inputs and of their interaction. We propose a model that extracts a fuzzy representation of the set of queried objects (e.g. dogs) and of the queried property in relation to that set (e.g. black with respect to dogs), outputting the appropriate quantifier for that relation.
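
A hand-written analogue of the fuzzy-set idea described above (the paper's model learns such representations end to end from language and vision); the soft scores, thresholds, and quantifier inventory below are illustrative assumptions.

```python
# Fuzzy quantification sketch: each object gets a soft score for being a dog
# and for being black; the quantifier is read off the fuzzy proportion of
# black dogs among dogs. Thresholds are arbitrary and only for illustration.

def fuzzy_quantifier(p_dog, p_black):
    """p_dog[i], p_black[i]: soft scores that object i is a dog / is black."""
    dogs = sum(p_dog)
    black_dogs = sum(d * b for d, b in zip(p_dog, p_black))
    ratio = black_dogs / dogs if dogs > 0 else 0.0
    if ratio < 0.05:
        return "none"
    if ratio < 0.5:
        return "some"
    if ratio < 0.95:
        return "most"
    return "all"

print(fuzzy_quantifier(p_dog=[0.9, 0.8, 0.1], p_black=[0.9, 0.1, 0.7]))  # "most"
```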


Meeting of the Association for Computational Linguistics | 2017

FOIL it! Find One mismatch between Image and Language caption

Ravi Shekhar; Sandro Pezzelle; Yauhen Klimovich; Aurélie Herbelot; Moin Nabi; Enver Sangineto; Raffaella Bernardi


North American Chapter of the Association for Computational Linguistics | 2018

Comparatives, Quantifiers, Proportions: A Multi-Task Model for the Learning of Quantities from Vision

Sandro Pezzelle; Ionut-Teodor Sorodoc; Raffaella Bernardi


Meeting of the Association for Computational Linguistics | 2018

Some of Them Can Be Guessed! Exploring the Effect of Linguistic Context in Predicting Quantifiers

Sandro Pezzelle; Shane Steinert-Threlkeld; Raffaella Bernardi; Jakub Szymanik


arXiv: Computer Vision and Pattern Recognition | 2018

The Wisdom of MaSSeS: Majority, Subjectivity, and Semantic Similarity in the Evaluation of VQA

Shailza Jolly; Sandro Pezzelle; Tassilo Klein; Andreas Dengel; Moin Nabi


Natural Language Engineering | 2018

Learning quantification from images: A structured neural architecture

Ionut Sorodoc; Sandro Pezzelle; Aurélie Herbelot; M. Dimiccoli; Raffaella Bernardi


Cognition | 2018

Probing the mental representation of quantifiers

Sandro Pezzelle; Raffaella Bernardi; Manuela Piazza

Collaboration


Dive into Sandro Pezzelle's collaborations.

Top Co-Authors

Gemma Boleda

University of Texas at Austin
