
Publication


Featured research published by Aitor Gonzalez-Agirre.


International Conference on Computational Linguistics | 2014

SemEval-2014 Task 10: Multilingual Semantic Textual Similarity

Eneko Agirre; Carmen Banea; Claire Cardie; Daniel M. Cer; Mona T. Diab; Aitor Gonzalez-Agirre; Weiwei Guo; Rada Mihalcea; German Rigau; Janyce Wiebe

In Semantic Textual Similarity, systems rate the degree of semantic equivalence between two text snippets. This year, the participants were challenged with new data sets for English, as well as the introduction of Spanish, as a new language in which to assess semantic similarity. For the English subtask, we exposed the systems to a diversity of testing scenarios, by preparing additional OntoNotes-WordNet sense mappings and news headlines, as well as introducing new genres, including image descriptions, DEFT discussion forums, DEFT newswire, and tweet-newswire headline mappings. For Spanish, since, to our knowledge, this is the first time that official evaluations are conducted, we used well-formed text, by featuring sentences extracted from encyclopedic content and newswire. The annotations for both tasks leveraged crowdsourcing. The Spanish subtask engaged 9 teams participating with 22 system runs, and the English subtask attracted 15 teams with 38 system runs.


North American Chapter of the Association for Computational Linguistics | 2015

SemEval-2015 Task 2: Semantic Textual Similarity, English, Spanish and Pilot on Interpretability

Eneko Agirre; Carmen Banea; Claire Cardie; Daniel M. Cer; Mona T. Diab; Aitor Gonzalez-Agirre; Weiwei Guo; Iñigo Lopez-Gazpio; Montse Maritxalar; Rada Mihalcea; German Rigau; Larraitz Uria; Janyce Wiebe

In semantic textual similarity (STS), systems rate the degree of semantic equivalence between two text snippets. This year, the participants were challenged with new datasets in English and Spanish. The annotations for both subtasks leveraged crowdsourcing. The English subtask attracted 29 teams with 74 system runs, and the Spanish subtask engaged 7 teams participating with 16 system runs. In addition, this year we ran a pilot task on interpretable STS, where the systems needed to add an explanatory layer, that is, they had to align the chunks in the sentence pair, explicitly annotating the kind of relation and the score of the chunk pair. The train and test data were manually annotated by an expert, and included headline and image sentence pairs from previous years. 7 teams participated with 29 runs.


North American Chapter of the Association for Computational Linguistics | 2016

SemEval-2016 Task 1: Semantic Textual Similarity, Monolingual and Cross-Lingual Evaluation

Eneko Agirre; Carmen Banea; Daniel M. Cer; Mona T. Diab; Aitor Gonzalez-Agirre; Rada Mihalcea; German Rigau; Janyce Wiebe

Paper presented at the 10th International Workshop on Semantic Evaluation (SemEval-2016), held on 16 and 17 June 2016 in San Diego, California.


North American Chapter of the Association for Computational Linguistics | 2016

SemEval-2016 Task 2: Interpretable Semantic Textual Similarity

Eneko Agirre; Aitor Gonzalez-Agirre; Iñigo Lopez-Gazpio; Montse Maritxalar; German Rigau; Larraitz Uria

Paper presented at the 10th International Workshop on Semantic Evaluation (SemEval-2016), held on 16 and 17 June 2016 in San Diego, California.


Knowledge-Based Systems | 2017

Interpretable semantic textual similarity

Iñigo Lopez-Gazpio; Montse Maritxalar; Aitor Gonzalez-Agirre; German Rigau; Larraitz Uria; Eneko Agirre

Highlights:
- We address interpretability, the ability of machines to explain their reasoning.
- We formalize it for textual similarity as graded typed alignment between two sentences.
- We release an annotated dataset and build and evaluate a high-performance system.
- We show that the output of the system can be used to produce explanations.
- Two user studies show preliminary evidence that explanations help humans perform better.

User acceptance of artificial intelligence agents might depend on their ability to explain their reasoning to the users. We focus on a specific text processing task, the Semantic Textual Similarity task (STS), where systems need to measure the degree of semantic equivalence between two sentences. We propose to add an interpretability layer (iSTS for short) formalized as the alignment between pairs of segments across the two sentences, where the relation between the segments is labeled with a relation type and a similarity score. This way, a system performing STS could use the interpretability layer to explain to users why it returned that specific score for the given sentence pair. We present a publicly available dataset of sentence pairs annotated following the formalization. We then develop an iSTS system trained on this dataset, which given a sentence pair finds what is similar and what is different, in the form of graded and typed segment alignments. When evaluated on the dataset, the system performs better than an informed baseline, showing that the dataset and task are well-defined and feasible. Most importantly, two user studies show how the iSTS system output can be used to automatically produce explanations in natural language. Users performed the two tasks better when having access to the explanations, providing preliminary evidence that our dataset and method to automatically produce explanations do help users understand the output of STS systems better.
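The graded, typed segment alignment described in the abstract can be sketched as a small data structure. A minimal illustration in Python, assuming the relation labels and 0-5 scale of the iSTS formulation; the class name and the example chunks are hypothetical:

```python
from dataclasses import dataclass

# Relation labels in the iSTS formulation: equivalent, opposite, more specific
# (in one direction or the other), similar, related, and not aligned.
RELATIONS = {"EQUI", "OPPO", "SPE1", "SPE2", "SIMI", "REL", "NOALI"}


@dataclass
class ChunkAlignment:
    """One aligned chunk pair with a typed relation and a graded score."""
    chunk1: str      # chunk from the first sentence
    chunk2: str      # chunk from the second sentence
    relation: str    # one of RELATIONS
    score: float     # 0 (unrelated) .. 5 (semantically equivalent)

    def __post_init__(self) -> None:
        if self.relation not in RELATIONS:
            raise ValueError(f"unknown relation type: {self.relation}")
        if not 0.0 <= self.score <= 5.0:
            raise ValueError("score must be in [0, 5]")


# The interpretability layer for a sentence pair is a list of such alignments.
alignment = [
    ChunkAlignment("a man", "a person", "SPE1", 4.0),
    ChunkAlignment("is playing guitar", "is playing a guitar", "EQUI", 5.0),
]
```

Each alignment carries both a label and a score, so a downstream component can verbalize it ("'a man' is more specific than 'a person'") to produce the natural-language explanations the user studies evaluate.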


North American Chapter of the Association for Computational Linguistics | 2015

UBC: Cubes for English Semantic Textual Similarity and Supervised Approaches for Interpretable STS

Eneko Agirre; Aitor Gonzalez-Agirre; Iñigo Lopez-Gazpio; Montse Maritxalar; German Rigau; Larraitz Uria

In Semantic Textual Similarity, systems rate the degree of semantic equivalence on a graded scale from 0 to 5, with 5 being the most similar. For the English subtask, we present a system which relies on several resources for token-to-token and phrase-to-phrase similarity to build a data-structure which holds all the information, and then combine the information to get a similarity score. We also participated in the pilot on Interpretable STS, where we apply a pipeline which first aligns tokens, then chunks, and finally uses supervised systems to label and score each chunk alignment.


Association for Information Science and Technology | 2016

Why are these similar? Investigating item similarity types in a large digital library

Aitor Gonzalez-Agirre; German Rigau; Eneko Agirre; Nikolaos Aletras; Mark Stevenson

We introduce a new problem, identifying the type of relation that holds between a pair of similar items in a digital library. Being able to provide a reason why items are similar has applications in recommendation, personalization, and search. We investigate the problem within the context of Europeana, a large digital library containing items related to cultural heritage. A range of types of similarity in this collection were identified. A set of 1,500 pairs of items from the collection were annotated using crowdsourcing. A high intertagger agreement (average 71.5 Pearson correlation) was obtained and demonstrates that the task is well defined. We also present several approaches to automatically identifying the type of similarity. The best system applies linear regression and achieves a mean Pearson correlation of 71.3, close to human performance. The problem formulation and data set described here were used in a public evaluation exercise, the *SEM shared task on Semantic Textual Similarity. The task attracted the participation of 6 teams, who submitted 14 system runs. All annotations, evaluation scripts, and system runs are freely available.
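The evaluations reported above (71.5 inter-tagger agreement, 71.3 for the best system) use Pearson correlation between predicted and gold similarity scores, the standard metric across these STS tasks. A minimal sketch of the computation; the score vectors are made up for illustration:

```python
import math


def pearson(xs: list[float], ys: list[float]) -> float:
    """Pearson correlation coefficient between two equal-length score lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)


gold = [5.0, 3.2, 1.0, 4.4, 0.0]    # hypothetical human similarity annotations
system = [4.8, 2.9, 1.5, 4.0, 0.3]  # hypothetical system predictions
r = pearson(gold, system)           # close to 1.0 when the scores track each other
```

The shared tasks conventionally report the coefficient scaled by 100, so a correlation of 0.715 appears as 71.5.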


Joint Conference on Lexical and Computational Semantics | 2012

SemEval-2012 Task 6: A Pilot on Semantic Textual Similarity

Eneko Agirre; Daniel M. Cer; Mona T. Diab; Aitor Gonzalez-Agirre


Joint Conference on Lexical and Computational Semantics | 2013

*SEM 2013 shared task: Semantic Textual Similarity

Eneko Agirre; Daniel M. Cer; Mona T. Diab; Aitor Gonzalez-Agirre; Weiwei Guo


Language Resources and Evaluation | 2012

Multilingual Central Repository version 3.0

Aitor Gonzalez-Agirre; Egoitz Laparra; German Rigau

Collaboration


Explore Aitor Gonzalez-Agirre's collaborations.

Top Co-Authors

German Rigau (University of the Basque Country)
Eneko Agirre (University of the Basque Country)
Iñigo Lopez-Gazpio (University of the Basque Country)
Larraitz Uria (University of the Basque Country)
Montse Maritxalar (University of the Basque Country)
Mona T. Diab (George Washington University)
Carmen Banea (University of North Texas)
Janyce Wiebe (University of Pittsburgh)