Danilo Giampiccolo
National Institute of Standards and Technology
Publications
Featured research published by Danilo Giampiccolo.
Meeting of the Association for Computational Linguistics | 2007
Danilo Giampiccolo; Bernardo Magnini; Ido Dagan; Bill Dolan
This paper presents the Third PASCAL Recognising Textual Entailment Challenge (RTE-3), providing an overview of the dataset creation methodology and the submitted systems. In creating this year's dataset, a number of longer texts were introduced to make the challenge more oriented towards realistic scenarios. Additionally, a pool of resources was offered so that participants could share common tools. A pilot task was also set up, aimed at differentiating unknown entailments from identified contradictions and at providing justifications for overall system decisions. 26 participants submitted 44 runs, using different approaches, generally presenting new entailment models, and achieving higher scores than in the previous challenges.
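Recognising textual entailment asks whether a hypothesis can be inferred from a given text, and RTE runs are scored by accuracy over binary judgements. A minimal sketch of that setup follows; the example pair and the accuracy function are illustrative, not taken from the RTE-3 dataset.

```python
# Each RTE item pairs a text (T) with a hypothesis (H) and a gold
# binary label: does T entail H? (Illustrative pair, not RTE-3 data.)
examples = [
    {"text": "The cat sat on the mat in the kitchen.",
     "hypothesis": "A cat was in the kitchen.",
     "entailment": True},
    {"text": "The cat sat on the mat in the kitchen.",
     "hypothesis": "The mat was outdoors.",
     "entailment": False},
]

def accuracy(predictions, gold):
    """Fraction of entailment judgements matching the gold labels;
    the primary measure used to score RTE runs."""
    correct = sum(p == g for p, g in zip(predictions, gold))
    return correct / len(gold)

gold = [ex["entailment"] for ex in examples]
print(accuracy([True, False], gold))  # 1.0
```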
Cross Language Evaluation Forum | 2009
Anselmo Peñas; Pamela Forner; Richard F. E. Sutcliffe; Álvaro Rodrigo; Corina Forăscu; Iñaki Alegria; Danilo Giampiccolo; Nicolas Moreau; Petya Osenova
This paper describes the first round of ResPubliQA, a Question Answering (QA) evaluation task over European legislation, proposed at the Cross Language Evaluation Forum (CLEF) 2009. The exercise consists of extracting a relevant paragraph of text that completely satisfies the information need expressed by a natural language question. The general goals of the exercise are (i) to study whether current QA technologies tuned for newswire collections and Wikipedia can be adapted to a new domain (law, in this case); (ii) to move to a more realistic scenario, considering people close to the law as users and paragraphs as system output; (iii) to compare current QA technologies with pure Information Retrieval (IR) approaches; and (iv) to introduce into QA systems the Answer Validation technologies developed over the past three years. The paper describes the task in more detail, presenting the different types of questions, the methodology for the creation of the test sets, and the new evaluation measure, and analysing the results obtained by the systems and the most successful approaches. Eleven groups participated with 28 runs. In addition, we evaluated 16 baseline runs (2 per language), based only on a pure IR approach, for comparison purposes. In terms of accuracy, scores were generally higher than in previous QA campaigns.
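The "new evaluation measure" is not named in the abstract; the measure introduced for ResPubliQA 2009 is commonly known as c@1, which credits a system for leaving a question unanswered rather than answering it incorrectly. A minimal sketch under that assumption (the function and variable names are illustrative):

```python
def c_at_1(n_correct: int, n_unanswered: int, n_total: int) -> float:
    """c@1: extends plain accuracy by crediting unanswered questions
    in proportion to the system's accuracy on answered ones:

        c@1 = (n_R + n_U * n_R / n) / n

    with n_R correct answers, n_U unanswered questions, n questions total.
    """
    acc = n_correct / n_total
    return (n_correct + n_unanswered * acc) / n_total

# Answering 60/100 correctly and leaving 20 unanswered beats answering
# the same 60 correctly and getting the other 40 wrong: 0.72 vs 0.60.
print(c_at_1(60, 20, 100))  # 0.72
print(c_at_1(60, 0, 100))   # 0.60
```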
Text Analysis Conference (TAC) | 2009
Luisa Bentivogli; Peter Clark; Ido Dagan; Danilo Giampiccolo
Joint Conference on Lexical and Computational Semantics | 2013
Myroslava O. Dzikovska; Rodney D. Nielsen; Chris Brew; Claudia Leacock; Danilo Giampiccolo; Luisa Bentivogli; Peter Clark; Ido Dagan; Hoa Trang Dang
Empirical Methods in Natural Language Processing | 2011
Matteo Negri; Luisa Bentivogli; Yashar Mehdad; Danilo Giampiccolo; Alessandro Marchetti
Joint Conference on Lexical and Computational Semantics | 2012
Matteo Negri; Alessandro Marchetti; Yashar Mehdad; Luisa Bentivogli; Danilo Giampiccolo
Language Resources and Evaluation | 2010
Luisa Bentivogli; Elena Cabrio; Ido Dagan; Danilo Giampiccolo; Medea Lo Leggio; Bernardo Magnini
Meeting of the Association for Computational Linguistics | 2007
Satoshi Sekine; Kentaro Inui; Ido Dagan; Bill Dolan; Danilo Giampiccolo; Bernardo Magnini
Language Resources and Evaluation | 2012
Anselmo Peñas; Bernardo Magnini; Pamela Forner; Richard F. E. Sutcliffe; Álvaro Rodrigo; Danilo Giampiccolo
Language Resources and Evaluation | 2012
Matteo Negri; Yashar Mehdad; Alessandro Marchetti; Danilo Giampiccolo; Luisa Bentivogli