Elisa Noguera
University of Alicante
Publications
Featured research published by Elisa Noguera.
international conference natural language processing | 2005
Antonio Toral; Elisa Noguera; Fernando Llopis; Rafael Muñoz
This paper studies the use of Named Entity Recognition (NER) for the Question Answering (QA) task on Spanish texts. NER applied as a preprocessing step not only helps to detect the answer to the question but also decreases the amount of data to be considered by QA. Our proposal reduces the quantity of data by 26% and, moreover, increases the efficiency of the system by 9%.
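The abstract sketches NER as a filter that discards text unlikely to contain an answer of the expected type. As a rough illustration of the idea (not the authors' actual pipeline), a Python sketch using spaCy's Spanish model and a hypothetical question-type-to-entity-label mapping could look like this:

    # Hypothetical sketch of NER-based filtering for QA preprocessing.
    # Assumes spaCy's Spanish model; ANSWER_TYPES is an invented mapping.
    import spacy

    nlp = spacy.load("es_core_news_sm")

    ANSWER_TYPES = {
        "who": {"PER"},
        "where": {"LOC"},
        "which_org": {"ORG"},
    }

    def filter_passages(passages, question_type):
        """Keep only passages containing an entity of the expected answer type."""
        wanted = ANSWER_TYPES[question_type]
        return [text for text in passages
                if any(ent.label_ in wanted for ent in nlp(text).ents)]

Passages with no entity of the expected type are dropped before the QA module runs, which is what yields the reduction in data the abstract reports.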
text speech and dialogue | 2005
Elisa Noguera; Antonio Toral; Fernando Llopis; Rafael Muñoz
In a previous paper we showed that Named Entity Recognition plays an important role in improving Question Answering, both by increasing the quality of the data and by reducing its quantity. Here we present a more in-depth discussion, studying several ways in which NER can be applied in order to produce a maximum data reduction. We achieve a 60% reduction without significant data loss, and a 92.5% reduction with a reasonable impact on data quality.
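Pushing the same filter down to the sentence level is one plausible way to reach the larger reductions mentioned above. A hypothetical continuation of the previous sketch (reusing its nlp and ANSWER_TYPES), with a helper to measure the reduction achieved:

    # Hypothetical sentence-level variant of the filter above, plus a helper
    # that reports the percentage of text removed. Not the authors' code.
    def filter_sentences(passages, question_type):
        wanted = ANSWER_TYPES[question_type]
        kept = []
        for text in passages:
            for sent in nlp(text).sents:
                if any(ent.label_ in wanted for ent in sent.ents):
                    kept.append(sent.text)
        return kept

    def reduction_pct(original, filtered):
        total = sum(len(t) for t in original)
        return 100.0 * (1.0 - sum(len(t) for t in filtered) / total)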
cross language evaluation forum | 2006
Sergio Ferrández; Pilar López-Moreno; Sandra Roger; Antonio Ferrández; Jesús Peral; X. Alvarado; Elisa Noguera; Fernando Llopis
A previous version of AliQAn participated in the CLEF 2005 Spanish Monolingual Question Answering task. For this year's run, new and representative patterns for question analysis and answer extraction have been added to the system. A new ontology of question types has been included, the handling of inexact questions has been improved, and the information retrieval engine has been modified to consider only the keywords detected by the question analysis module. In addition, many PoS tagger and SUPAR errors have been fixed and, finally, dictionaries of cities and countries have been incorporated. To deal with the cross-lingual tasks, we employ the BRILI system. The results achieved are an overall accuracy of 37.89% for the monolingual task and 21.58% for the bilingual task.
cross language evaluation forum | 2005
Fernando Llopis; Elisa Noguera
The paper describes our participation in the monolingual tasks at CLEF 2005. We submitted results for the following languages: French, Portuguese, Bulgarian and Hungarian, using a passage retrieval system. We focused on a version of this system that combines passages of different sizes to improve retrieval performance. After an analysis of our experiments and of the official results at CLEF, we find that our passage retrieval combination model achieves considerably improved scores.
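The combination model is described only at a high level, but the general idea of fusing runs obtained with different passage sizes can be sketched as follows (window sizes, weights and the linear fusion are assumptions, not the IR-n formula):

    # Illustrative fusion of per-document scores from passage runs of
    # different window sizes; sizes and weights are invented for the example.
    from collections import defaultdict

    def combine_runs(runs, weights):
        """runs: {window_size: {doc_id: best_passage_score}} -> fused ranking."""
        fused = defaultdict(float)
        for size, scores in runs.items():
            for doc_id, score in scores.items():
                fused[doc_id] += weights[size] * score
        return sorted(fused.items(), key=lambda kv: kv[1], reverse=True)

    # Example: scores from an 8-sentence run and a 30-sentence run.
    runs = {8: {"d1": 2.1, "d2": 1.4}, 30: {"d1": 1.7, "d3": 2.0}}
    print(combine_runs(runs, weights={8: 0.6, 30: 0.4}))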
cross language evaluation forum | 2004
Fernando Llopis; Rafael Muñoz; Rafael M. Terol; Elisa Noguera
This paper describes the fourth participation of the IR-n system (University of Alicante) in the CLEF evaluation campaigns. For CLEF 2004, we modified our similarity measure and our query expansion model. For the similarity measure, we now use normalization based on the number of words in each passage. We tested two different approaches to query expansion: the first is based on documents and the second on passages.
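The abstract only hints at the normalization, so the following is a guess at its shape rather than the published formula: a raw passage score damped by a function of the passage's word count, so that longer passages do not win merely by matching more terms.

    # Invented example of length-normalizing a passage similarity score;
    # the real IR-n measure is defined in the paper, not reproduced here.
    import math

    def normalized_score(raw_score, passage_word_count, pivot=100.0):
        """Divide by a slowly growing function of length ('pivot' is assumed)."""
        return raw_score / (1.0 + math.log(1.0 + passage_word_count / pivot))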
cross language evaluation forum | 2005
Óscar Ferrández; Zornitsa Kozareva; Antonio Toral; Elisa Noguera; Andrés Montoyo; Rafael Muñoz; Fernando Llopis
For our participation in GeoCLEF 2005 we developed a system made up of three modules: an Information Retrieval module and two Named Entity Recognition modules, one based on machine learning and one based on knowledge. We carried out several runs with different combinations of these modules to solve the proposed tasks. The system ranked second for the tasks against the German collections and third for the tasks against the English collections.
cross language evaluation forum | 2005
Borja Navarro; Lorenza Moreno-Monteagudo; Elisa Noguera; Sonia Vázquez; Fernando Llopis; Andrés Montoyo
The main topic of this paper is the context size needed for an efficient interactive cross-language Question Answering system. We compare two approaches: the first one (baseline system) shows the user whole passages (maximum context: 10 sentences); the second one (experimental system) shows only a clause (minimum context). As in any cross-language system, the main problem is that the language of the question (Spanish) and the language of the answer context (English) are different. The results show that a large context is better. However, there are specific relations between the context size and the user's knowledge of the language of the answer: users with a poor level of English prefer contexts with few words.
cross language evaluation forum | 2008
Elisa Noguera; Fernando Llopis
The paper describes our participation in the monolingual tasks at CLEF 2007. We submitted results for the following languages: Hungarian, Bulgarian and Czech. We focused on studying different query expansion techniques, Probabilistic Relevance Feedback (PRF) and Mutual Information Relevance Feedback (MI-RF), to improve retrieval performance. After an analysis of our experiments and of the official results at CLEF 2007, we achieved considerably improved scores by using query expansion techniques across these languages.
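MI-RF is named but not defined in the abstract; the usual shape of mutual-information feedback is to score candidate expansion terms by how much more often they occur in the top-ranked (pseudo-relevant) passages than in the collection as a whole. A hedged sketch under that assumption:

    # Sketch of mutual-information-style relevance feedback. Passages are
    # token lists; the scoring formula is a generic PMI-style weight, not
    # necessarily the one used by the authors.
    import math
    from collections import Counter

    def mi_expansion_terms(top_passages, all_passages, query_terms, k=5):
        n = len(all_passages)
        df = Counter()                       # doc frequency over the collection
        for p in all_passages:
            df.update(set(p))
        fb = Counter()                       # frequency in the feedback set
        for p in top_passages:
            fb.update(set(p))
        scores = {}
        for term, n_fb in fb.items():
            if term in query_terms:
                continue
            p_t = df[term] / n               # P(term) in the collection
            p_fb = n_fb / len(top_passages)  # P(term | top passages)
            scores[term] = p_fb * math.log(p_fb / p_t)
        return [t for t, _ in sorted(scores.items(), key=lambda kv: -kv[1])[:k]]

The top-k scored terms are then appended to the query before a second retrieval pass.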
text speech and dialogue | 2007
Elisa Noguera; Fernando Llopis; Antonio Ferrández; Alberto Escapa
Previous work on evaluating the performance of Question Answering (QA) systems has focused on the evaluation of precision. In this paper, we develop a mathematical procedure to explore new evaluation measures for QA systems that take the answer time into account. We also carried out an exercise for the evaluation of QA systems under a time constraint in the CLEF 2006 campaign, using the proposed measures. The main conclusion is that real-time evaluation can constitute a new scenario for the evaluation of QA systems.
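The abstract does not give the measures themselves, so the snippet below only illustrates the kind of combination involved: an accuracy score discounted by answer time (the exponential decay and half-life are invented for the example, not the measure from the paper).

    # Hypothetical time-weighted QA score: accuracy halves for every
    # 'half_life_s' seconds of answer time.
    def time_weighted_score(accuracy, answer_time_s, half_life_s=60.0):
        return accuracy * 0.5 ** (answer_time_s / half_life_s)

    print(time_weighted_score(0.40, 30.0))   # fast, moderately accurate system
    print(time_weighted_score(0.55, 300.0))  # slower but more accurate system

Under such a measure, a fast system with modest accuracy can outscore a slower, more accurate one, which is exactly the trade-off a real-time evaluation scenario is meant to expose.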
advanced information networking and applications | 2007
Elisa Noguera; Fernando Llopis; Antonio Ferrández; Alberto Escapa
Previous work on evaluating the performance of Question Answering (QA) systems has focused on the evaluation of precision; the importance of the answer time, however, has never been evaluated. This paper studies the problem of evaluation in QA systems, focusing on how time can be taken into account. Specifically, we carried out an exercise for the evaluation of QA systems under a time constraint in the CLEF 2006 campaign, proposing new measures that combine precision with the answer time. The performance achieved by each participating system and a statistical analysis of the results are also given. The main conclusion is that the real-time evaluation of QA systems can be a more realistic scenario than the traditional one used in the main QA evaluation forums.