Gemma Boleda
University of Texas at Austin
Publications
Featured research published by Gemma Boleda.
International Conference on Computational Linguistics | 2014
Islam Beltagy; Stephen Roller; Gemma Boleda; Katrin Erk; Raymond J. Mooney
We represent natural language semantics by combining logical and distributional information in probabilistic logic. We use Markov Logic Networks (MLNs) for the RTE task and Probabilistic Soft Logic (PSL) for the STS task. The system is evaluated on the SICK dataset. Our best system achieves 73% accuracy on the RTE task and a Pearson correlation of 0.71 on the STS task.
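The logical side of this combination can be illustrated with the Łukasiewicz operators that Probabilistic Soft Logic uses to turn hard rules into soft ones. The snippet below is a minimal sketch in Python: the word-pair similarities are made-up placeholders standing in for distributionally derived evidence, and the single conjunctive rule only illustrates how soft truth values compose; it is not the authors' system.

```python
# A minimal sketch of soft-logic inference in the PSL style, using
# Lukasiewicz operators. The similarity values below are hypothetical
# stand-ins for distributional evidence, not real model outputs.

def luk_and(a, b):
    """Lukasiewicz t-norm: soft conjunction of two truth values in [0, 1]."""
    return max(0.0, a + b - 1.0)

def luk_implies(a, b):
    """Lukasiewicz residuum: soft implication a -> b."""
    return min(1.0, 1.0 - a + b)

# Hypothetical distributional similarities acting as soft truth values for
# lexical alignment atoms between a premise and a hypothesis.
alignment_evidence = {
    ("man", "person"): 0.82,
    ("slicing", "cutting"): 0.74,
    ("cucumber", "vegetable"): 0.68,
}

# Soft truth of "premise entails hypothesis" under a toy conjunctive rule:
# every hypothesis content word must be covered by some premise word.
entailment_score = 1.0
for pair, similarity in alignment_evidence.items():
    entailment_score = luk_and(entailment_score, similarity)

print(f"soft entailment score: {entailment_score:.2f}")
```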
Meeting of the Association for Computational Linguistics | 2016
Ionut Sorodoc; Angeliki Lazaridou; Gemma Boleda; Aurélie Herbelot; Sandro Pezzelle; Raffaella Bernardi
In this paper, we investigate whether a neural network model can learn the meaning of natural language quantifiers (no, some, and all) from their use in visual contexts. We show that memory networks perform well on this task, and that explicit counting is not necessary for the system's performance, supporting psycholinguistic evidence on the acquisition of quantifiers.
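As a rough illustration of the memory-network setup, the sketch below (assuming PyTorch) attends over a set of object vectors representing a scene and classifies the result as no, some, or all. The dimensions, attention scheme, and random scene are illustrative assumptions, not the architecture evaluated in the paper.

```python
# A minimal sketch of a memory-network-style quantifier classifier over a
# set of object vectors standing in for a visual scene. Illustrative only.
import torch
import torch.nn as nn

class QuantifierMemNet(nn.Module):
    def __init__(self, obj_dim=64, hid_dim=128, n_classes=3):
        super().__init__()
        self.key = nn.Linear(obj_dim, hid_dim)            # project objects to memory keys
        self.value = nn.Linear(obj_dim, hid_dim)          # project objects to memory values
        self.query = nn.Parameter(torch.randn(hid_dim))   # learned query for the target property
        self.classifier = nn.Linear(hid_dim, n_classes)   # logits for no / some / all

    def forward(self, objects):
        # objects: (batch, n_objects, obj_dim)
        keys = self.key(objects)                           # (batch, n, hid)
        values = self.value(objects)                       # (batch, n, hid)
        scores = keys @ self.query                         # (batch, n) attention over memories
        attn = torch.softmax(scores, dim=-1)
        summary = (attn.unsqueeze(-1) * values).sum(dim=1) # (batch, hid) scene summary
        return self.classifier(summary)

# Toy usage with a random "scene" of 10 objects.
model = QuantifierMemNet()
scene = torch.randn(2, 10, 64)
print(model(scene).shape)  # torch.Size([2, 3])
```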
Empirical Methods in Natural Language Processing | 2016
Ngoc-Quan Pham; Germán Kruszewski; Gemma Boleda
Convolutional Neural Networks (CNNs) have been shown to yield very strong results in several computer vision tasks. Their application to language has received much less attention, and it has mainly focused on static classification tasks, such as sentence classification for sentiment analysis or relation extraction. In this work, we study the application of CNNs to language modeling, a dynamic, sequential prediction task that requires models to capture local as well as long-range dependency information. Our contribution is twofold. First, we show that CNNs achieve 11-26% better absolute performance than feed-forward neural language models, demonstrating their potential for language representation even in sequential tasks. As for recurrent models, our model outperforms RNNs but falls below state-of-the-art LSTM models. Second, we gain some understanding of the behavior of the model, showing that CNNs in language act as feature detectors at a high level of abstraction, as in computer vision, and that the model can profitably use information from as far as 16 words before the target.
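A minimal sketch of a convolutional language model, assuming PyTorch: stacked causal 1D convolutions over word embeddings produce next-word logits at every position. The vocabulary size, kernel width, and depth are illustrative choices, not the configuration reported in the paper; with four layers of width 5, the receptive field happens to cover roughly the 16-word window mentioned above.

```python
# A sketch of convolutional language modeling: left-padded (causal) 1D
# convolutions so each position only sees earlier words. Hyperparameters
# here are assumptions for illustration.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConvLM(nn.Module):
    def __init__(self, vocab=10000, emb=128, kernel=5, layers=4):
        super().__init__()
        self.embed = nn.Embedding(vocab, emb)
        self.pad = kernel - 1  # left padding keeps the convolution causal
        self.convs = nn.ModuleList(
            nn.Conv1d(emb, emb, kernel_size=kernel) for _ in range(layers)
        )
        self.out = nn.Linear(emb, vocab)

    def forward(self, tokens):
        # tokens: (batch, seq_len) integer word ids
        x = self.embed(tokens).transpose(1, 2)            # (batch, emb, seq)
        for conv in self.convs:
            x = torch.relu(conv(F.pad(x, (self.pad, 0))))  # pad on the left only
        return self.out(x.transpose(1, 2))                 # (batch, seq, vocab) next-word logits

# Toy usage on a random batch of token ids.
model = ConvLM()
tokens = torch.randint(0, 10000, (2, 20))
print(model(tokens).shape)  # torch.Size([2, 20, 10000])
```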
Empirical Methods in Natural Language Processing | 2015
Raffaella Bernardi; Gemma Boleda; Raquel Fernández; Denis Paperno
In this position paper we argue that an adequate semantic model must account for language in use, capturing how discourse context affects the meaning of words and larger linguistic units. Distributional semantic models are attractive models of meaning mainly because they capture conceptual aspects and are induced automatically from natural language data. However, they need to be extended to account for language use in a discourse or dialogue context. We discuss phenomena that the new generation of distributional semantic models should capture, and propose concrete tasks on which they could be tested.
Meeting of the Association for Computational Linguistics | 2012
Elia Bruni; Gemma Boleda; Marco Baroni; Nam Khanh Tran
Archive | 2004
Louise McNally; Gemma Boleda
Joint Conference on Lexical and Computational Semantics | 2013
Islam Beltagy; Cuong K. Chau; Gemma Boleda; Dan Garrette; Katrin Erk; Raymond J. Mooney
New Journal of Physics | 2013
Francesc Font-Clos; Gemma Boleda; Alvaro Corral
Proceedings of the 10th International Conference on Computational Semantics (IWCS 2013) -- Long Papers | 2013
Gemma Boleda; Marco Baroni; Louise McNally
Joint Conference on Lexical and Computational Semantics | 2012
Gemma Boleda; Sebastian Padó; Jason Utt