Publication


Featured research published by Pamela Forner.


Cross Language Evaluation Forum | 2008

Overview of the CLEF 2005 multilingual question answering track

Pamela Forner; Anselmo Peñas; Eneko Agirre; Iñaki Alegria; Corina Forăscu; Nicolas Moreau; Petya Osenova; Prokopis Prokopidis; Paulo Rocha; Bogdan Sacaleanu; Richard F. E. Sutcliffe; Erik F. Tjong Kim Sang

The general aim of the third CLEF Multilingual Question Answering Track was to set up a common and replicable evaluation framework to test both monolingual and cross-language Question Answering (QA) systems that process queries and documents in several European languages. Nine target languages and ten source languages were used to set up 8 monolingual and 73 cross-language tasks. Twenty-four groups participated in the exercise. Overall results showed a general increase in performance over the previous year. The best performing monolingual system, irrespective of target language, answered 64.5% of the questions correctly (in the monolingual Portuguese task), while the average of the best performances for each target language was 42.6%. The cross-language step, in contrast, entailed a considerable drop in performance. In addition to accuracy, the organisers also measured the relation between the correctness of an answer and a system's stated confidence in it, showing that the best systems did not always provide the most reliable confidence scores. We give an overview of the 2005 QA track, detail the procedure followed to build the test sets, and present a general analysis of the results.
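The abstract relates answer correctness to a system's stated confidence without spelling out the measure. As a minimal sketch, assuming a confidence-weighted score of the kind used in QA evaluations of that period (the exact CLEF 2005 formula is not given here): answers are ranked by decreasing self-reported confidence, and the running precision is averaged over ranks, so confident correct answers count most.

```python
def confidence_weighted_score(results):
    """Confidence-weighted score (CWS), as used in early QA evaluations.

    `results` is a list of (confidence, is_correct) pairs, one per question.
    Answers are ranked by decreasing self-reported confidence; the score is
    the mean, over ranks i, of the precision within the top-i answers, so
    correct answers given with high confidence contribute the most.
    """
    ranked = sorted(results, key=lambda r: r[0], reverse=True)
    correct_so_far = 0
    total = 0.0
    for i, (_, is_correct) in enumerate(ranked, start=1):
        correct_so_far += int(is_correct)
        total += correct_so_far / i
    return total / len(ranked) if ranked else 0.0

# Example run: 4 questions; the system is confident and right on two,
# confident and wrong on one, unsure and right on one.
run = [(0.9, True), (0.8, True), (0.7, False), (0.2, True)]
print(f"accuracy = {sum(c for _, c in run) / len(run):.3f}")   # 0.750
print(f"CWS      = {confidence_weighted_score(run):.3f}")      # 0.854
```

A system that is right exactly when it is confident scores higher under such a measure than one with the same accuracy but uninformative confidence values, which is the reliability gap the organisers observed.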


MLR '04 Proceedings of the Workshop on Multilingual Linguistic Resources | 2004

Revising the wordnet domains hierarchy: semantics, coverage and balancing

Luisa Bentivogli; Pamela Forner; Bernardo Magnini; Emanuele Pianta

The continuous expansion of the multilingual information society has led in recent years to a pressing demand for multilingual linguistic resources suitable for use in different applications. In this paper we present the WordNet Domains Hierarchy (WDH), a language-independent resource composed of 164 hierarchically organized domain labels (e.g. Architecture, Sport, Medicine). Although WDH has been successfully applied to various Natural Language Processing tasks, the first available version presented some problems, mostly related to the lack of a clear semantics for the domain labels. Other related issues were the coverage and the balancing of the domains. We illustrate a new version of WDH that addresses these problems through an explicit and systematic reference to the Dewey Decimal Classification. The new version of WDH has a better defined semantics and is applicable to a wider range of tasks.
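To make the shape of the resource concrete, here is a hypothetical sketch of WDH-style entries in Python: each domain label carries a parent in the hierarchy and an anchor into the Dewey Decimal Classification. The labels come from the abstract's examples; the field names and the specific parent/DDC assignments are illustrative, not the official WDH data.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Domain:
    """One WDH-style domain label, anchored to the Dewey Decimal Classification."""
    label: str
    parent: Optional[str]  # None for a top-level domain
    ddc: str               # DDC class used to pin down the label's semantics

# Hypothetical fragment; parent links and DDC anchors are illustrative.
WDH_SAMPLE = {
    "Applied_Science": Domain("Applied_Science", None, "600"),
    "Medicine":        Domain("Medicine", "Applied_Science", "610"),
    "Sport":           Domain("Sport", "Free_Time", "796"),
    "Architecture":    Domain("Architecture", "Art", "720"),
}

def path_to_root(label: str) -> list[str]:
    """Walk up the hierarchy from a label towards its top-level domain."""
    chain = [label]
    node = WDH_SAMPLE.get(label)
    while node is not None and node.parent is not None:
        chain.append(node.parent)
        node = WDH_SAMPLE.get(node.parent)
    return chain

print(path_to_root("Medicine"))  # ['Medicine', 'Applied_Science']
```

Anchoring each label to a DDC class is what gives the labels an explicit semantics: disputes about what a domain covers can be settled by the published classification rather than by intuition.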


Cross Language Evaluation Forum | 2009

Overview of ResPubliQA 2009: question answering evaluation over European legislation

Anselmo Peñas; Pamela Forner; Richard F. E. Sutcliffe; Álvaro Rodrigo; Corina Forăscu; Iñaki Alegria; Danilo Giampiccolo; Nicolas Moreau; Petya Osenova

This paper describes the first round of ResPubliQA, a Question Answering (QA) evaluation task over European legislation, proposed at the Cross Language Evaluation Forum (CLEF) 2009. The exercise consists of extracting a relevant paragraph of text that completely satisfies the information need expressed by a natural language question. The general goals of this exercise are (i) to study whether current QA technologies tuned for newswire collections and Wikipedia can be adapted to a new domain (law, in this case); (ii) to move to a more realistic scenario, considering people close to the law as users, and paragraphs as system output; (iii) to compare current QA technologies with pure Information Retrieval (IR) approaches; and (iv) to introduce into QA systems the Answer Validation technologies developed in the past three years. The paper describes the task in more detail, presenting the different types of questions, the methodology for the creation of the test sets and the new evaluation measure, and analysing the results obtained by systems and the most successful approaches. Eleven groups participated with 28 runs. In addition, we evaluated 16 baseline runs (2 per language) based only on a pure IR approach, for comparison purposes. Considering accuracy, scores were generally higher than in previous QA campaigns.
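The abstract does not name the new evaluation measure. A natural candidate from this evaluation setting is c@1, which credits a system for leaving a question unanswered rather than answering it wrongly; the sketch below assumes that reading.

```python
def c_at_1(n_correct: int, n_unanswered: int, n_total: int) -> float:
    """c@1: an accuracy variant that rewards abstention.

    Each unanswered question is credited at the system's observed accuracy,
    so declining to answer is worth more than answering incorrectly.
    """
    accuracy = n_correct / n_total
    return (n_correct + n_unanswered * accuracy) / n_total

# 100 questions: 60 correct, 10 wrong, 30 left unanswered.
print(c_at_1(60, 30, 100))  # 0.78, versus a plain accuracy of 0.60
```

Under plain accuracy the 30 abstentions count as failures; under c@1 they are worth the system's demonstrated reliability, which is what makes Answer Validation style "answer only when sure" strategies pay off.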


Lecture Notes in Computer Science | 2012

Information Access Evaluation. Multilinguality, Multimodality, and Visual Analytics

Tiziana Catarci; Pamela Forner; Djoerd Hiemstra; Anselmo Peñas; Giuseppe Santucci

This book constitutes the proceedings of the Third International Conference of the CLEF Initiative, CLEF 2012, held in Rome, Italy, in September 2012. The 14 papers and 3 poster abstracts presented were carefully reviewed and selected for inclusion in this volume. Furthermore, the book contains 2 keynote papers. The papers are organized in topical sections named: benchmarking and evaluation initiatives; information access; and evaluation methodologies and infrastructure.


Cross Language Evaluation Forum | 2013

QA4MRE 2011-2013: Overview of Question Answering for Machine Reading Evaluation

Anselmo Peñas; Eduard H. Hovy; Pamela Forner; Álvaro Rodrigo; Richard F. E. Sutcliffe; Roser Morante

This paper describes the methodology for testing the performance of Machine Reading systems through Question Answering and Reading Comprehension tests. This was the aim of the QA4MRE challenge, which was run as a lab at CLEF 2011–2013. The traditional QA task was replaced by a new Machine Reading task, whose intention was to ask questions that required a deep understanding of individual short texts; systems were required to choose one answer by analysing the corresponding test document in conjunction with background text collections provided by the organisation. Four different tasks were organised during these years: Main Task, Processing Modality and Negation for Machine Reading, Machine Reading of Biomedical Texts about Alzheimer's Disease, and Entrance Exams. This paper describes their motivation, their goals, the methodology for preparing the data sets, the background collections, the metrics used for evaluation, and the lessons learned over these three years.
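As a concrete picture of the task format the abstract describes, each item pairs one test document with a question and a fixed set of candidate answers, and a run is graded with an abstention-aware accuracy. The field names below, and the choice of c@1 as the grading metric (as sketched earlier), are assumptions for illustration, not the official QA4MRE data format.

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class ReadingTestItem:
    """One QA4MRE-style item; field names are illustrative."""
    document: str          # the short test text to be read
    question: str
    candidates: list[str]  # multiple choices, exactly one correct
    gold_index: int        # index of the correct candidate

def grade_run(items: list[ReadingTestItem],
              predict: Callable[[ReadingTestItem], Optional[int]]) -> float:
    """Grade a run; `predict` returns a candidate index, or None to abstain."""
    correct = unanswered = 0
    for item in items:
        choice = predict(item)
        if choice is None:
            unanswered += 1
        elif choice == item.gold_index:
            correct += 1
    n = len(items)
    accuracy = correct / n
    return (correct + unanswered * accuracy) / n  # c@1, as sketched earlier
```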


Archive | 2010

Proceedings of LREC

Diana Santos; L.M. Cabral; Corina Forăscu; Pamela Forner; Fredric C. Gey; Katrin Lamm; Thomas Mandl; Petya Osenova; Anselmo Peñas; Álvaro Rodrigo; Julia Maria Schulz; Y. Skalban; Erik F. Tjong Kim Sang


CLEF (Online Working Notes/Labs/Workshop) | 2011

Overview of QA4MRE at CLEF 2011: Question Answering for Machine Reading Evaluation

Anselmo Peñas; Eduard H. Hovy; Pamela Forner; Álvaro Rodrigo; Richard F. E. Sutcliffe; Caroline Sporleder; Corina Forascu; Yassine Benajiba; Petya Osenova


Language Resources and Evaluation | 2012

Question answering at the cross-language evaluation forum 2003–2010

Anselmo Peñas; Bernardo Magnini; Pamela Forner; Richard F. E. Sutcliffe; Álvaro Rodrigo; Danilo Giampiccolo


CLEF (Working Notes) | 2013

Overview of QA4MRE Main Task at CLEF 2013

Richard F. E. Sutcliffe; Anselmo Peñas; Eduard H. Hovy; Pamela Forner; Álvaro Rodrigo; Corina Forascu; Yassine Benajiba; Petya Osenova


CLEF (Working Notes) | 2013

Overview of QA4MRE 2013 Entrance Exams Task

Anselmo Peñas; Yusuke Miyao; Eduard H. Hovy; Pamela Forner; Noriko Kando

Collaboration


Dive into Pamela Forner's collaborations.

Top Co-Authors

Anselmo Peñas (National University of Distance Education)
Álvaro Rodrigo (National University of Distance Education)
Petya Osenova (Bulgarian Academy of Sciences)
Corina Forăscu (Alexandru Ioan Cuza University)
Giuseppe Santucci (Sapienza University of Rome)
Danilo Giampiccolo (National Institute of Standards and Technology)
Eduard H. Hovy (National Institute of Informatics)