Publication


Featured research published by Isabelle Robba.


meeting of the association for computational linguistics | 1998

Reference Resolution beyond Coreference: a Conceptual Frame and its Application

Andrei Popescu-Belis; Isabelle Robba; Gérard Sabah

A model for reference use in communication is proposed, from a representationist point of view. Both the sender and the receiver of a message handle representations of their common environment, including mental representations of objects. Reference resolution by a computer is viewed as the construction of object representations using referring expressions from the discourse, whereas often only coreference links between such expressions are looked for. Differences between these two approaches are discussed. The model has been implemented with elementary rules, and tested on complex narrative texts (hundreds to thousands of referring expressions). The results support the mental representations paradigm.


web intelligence | 2007

Lexical validation of answers in Question Answering

Anne-Laure Ligozat; Brigitte Grau; Anne Vilnat; Isabelle Robba; Arnaud Grappy

Question answering (QA) aims at retrieving precise information from a large collection of documents, typically the Web. Different techniques can be used to find relevant information, and to compare these techniques, it is important to evaluate question answering systems. The objective of an Answer Validation task is to estimate the correctness of an answer returned by a QA system for a question, according to the text snippet given to support it. In this article, we present a lexical strategy for deciding if the snippets justify the answers, based on our own question answering system. We discuss our results, and show the possible extensions of our strategy.
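
The lexical validation strategy itself is not spelled out in this summary; as a rough, hypothetical illustration of the general idea, a snippet could be accepted when it covers enough of the question's content words and contains the candidate answer. The tokenization, stop-word list and threshold below are assumptions made for this sketch, not the authors' actual rules.

# Hypothetical sketch of a lexical answer-validation check (Python).
# Not the system described above: tokenizer, stop words and threshold are assumptions.
import re
STOP_WORDS = {"the", "a", "an", "of", "in", "on", "to", "is", "was", "what", "who", "when", "where"}
def content_words(text):
    """Lowercased word tokens minus stop words."""
    return {w for w in re.findall(r"\w+", text.lower()) if w not in STOP_WORDS}
def validates(question, answer, snippet, threshold=0.6):
    """True when the snippet covers enough question terms and contains the answer."""
    q_terms, s_terms = content_words(question), content_words(snippet)
    if not q_terms:
        return False
    coverage = len(q_terms & s_terms) / len(q_terms)
    return coverage >= threshold and answer.lower() in snippet.lower()
print(validates("Who wrote Les Misérables?", "Victor Hugo",
                "Les Misérables was written by Victor Hugo in 1862."))  # True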


web intelligence | 2011

Selecting Answers to Questions from Web Documents by a Robust Validation Process

Arnaud Grappy; Brigitte Grau; Mathieu-Henri Falco; Anne-Laure Ligozat; Isabelle Robba; Anne Vilnat

Question answering (QA) systems aim at finding answers to questions posed in natural language using a collection of documents. When the collection is extracted from the Web, the structure and style of the texts are quite different from those of newspaper articles. We developed a QA system based on an answer validation process able to handle Web specificities. A large number of candidate answers are extracted from short passages in order to be validated according to question and passage characteristics. The validation module is based on a machine learning approach. It takes into account criteria characterizing both passage and answer relevance at the surface, lexical, syntactic and semantic levels, in order to deal with different types of texts. We present and compare results obtained for factual questions posed on a Web collection and on a newspaper collection. We show that our system outperforms a baseline by up to 48% in MRR.
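
MRR (mean reciprocal rank) is the standard ranking metric referred to here: for each question, the reciprocal of the rank of the first correct answer, averaged over all questions. A minimal sketch on hypothetical data:

# Mean reciprocal rank over a hypothetical set of three questions.
# A question with no correct answer in its list contributes 0.
def mean_reciprocal_rank(ranked_answers, gold):
    """ranked_answers: one ranked list per question; gold: the correct answer per question."""
    total = 0.0
    for answers, correct in zip(ranked_answers, gold):
        for rank, answer in enumerate(answers, start=1):
            if answer == correct:
                total += 1.0 / rank
                break
    return total / len(gold)
runs = [["Paris", "Lyon"], ["1867", "1869", "1871"], ["blue", "red"]]
gold = ["Paris", "1869", "green"]
print(mean_reciprocal_rank(runs, gold))  # (1 + 1/2 + 0) / 3 = 0.5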


arXiv: Computation and Language | 1997

Cooperation between pronoun and reference resolution for unrestricted texts

Andrei Popescu-Belis; Isabelle Robba

Anaphora resolution is envisaged in this paper as part of the reference resolution process. A general open architecture is proposed, which can be particularized and configured in order to simulate some classic anaphora resolution methods. With the aim of improving pronoun resolution, the system takes advantage of elementary cues about characters of the text, which are represented through a particular data structure. In its most robust configuration, the system uses only a general lexicon, a local morphosyntactic parser and a dictionary of synonyms. A short comparative corpus analysis shows that narrative texts are the most suitable for testing such a system.


conference of the european chapter of the association for computational linguistics | 2003

PEAS, the first instantiation of a comparative framework for evaluating parsers of French

Véronique Gendner; Gabriel Illouz; Michèle Jardino; Laura Monceaux; Patrick Paroubek; Isabelle Robba; Anne Vilnat

This paper presents PEAS, the first comparative evaluation framework for parsers of French whose annotation formalism allows the annotation of both constituents and functional relations. A test corpus containing an assortment of different text types has been built and part of it has been manually annotated. Precision/Recall and crossing brackets metrics will be adapted to our formalism and applied to the parses produced by one parser from academia and another one from industry in order to validate the framework.
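
For readers unfamiliar with the metrics mentioned, constituent precision/recall compares the bracketed spans a parser produces with the reference spans. The sketch below shows the usual span-based computation on hypothetical bracketings; it is not the PEAS protocol itself, which also covers functional relations.

# PARSEVAL-style span precision/recall on hypothetical constituents,
# each encoded as a (label, start, end) triple. Illustrative only.
def precision_recall(predicted, reference):
    pred, ref = set(predicted), set(reference)
    matched = len(pred & ref)
    precision = matched / len(pred) if pred else 0.0
    recall = matched / len(ref) if ref else 0.0
    return precision, recall
ref_spans = {("NP", 0, 2), ("VP", 2, 5), ("PP", 3, 5), ("S", 0, 5)}
pred_spans = {("NP", 0, 2), ("VP", 2, 5), ("NP", 3, 5), ("S", 0, 5)}
p, r = precision_recall(pred_spans, ref_spans)
print(f"precision={p:.2f} recall={r:.2f}")  # precision=0.75 recall=0.75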


MLQA '06 Proceedings of the Workshop on Multilingual Question Answering | 2006

Evaluation and improvement of cross-lingual question answering strategies

Anne-Laure Ligozat; Brigitte Grau; Isabelle Robba; Anne Vilnat

This article presents a bilingual question answering system, which is able to process questions and documents both in French and in English. Two cross-lingual strategies are described and evaluated. First, we study the contribution of bi-term translation, and the influence of the completion of the translation dictionaries. Then, we propose a strategy for transferring the question analysis from one language to the other, and we study its influence on the performance of our system.


international conference on tools with artificial intelligence | 2007

Towards an Automatic Validation of Answers in Question Answering

Anne-Laure Ligozat; Brigitte Grau; Anne Vilnat; Isabelle Robba; Arnaud Grappy

Question answering (QA) aims at retrieving precise information from a large collection of documents. Different techniques can be used to find relevant information, and to compare these techniques, it is important to evaluate QA systems. The objective of an Answer Validation task is thus to judge the correctness of an answer returned by a QA system for a question, according to the text snippet given to support it. We participated in such a task in 2006. In this article, we present our strategy for deciding whether the snippets justify the answers: a strategy based on our own QA system, comparing the answers it returns with the answer to be judged. We discuss our results, then point out the difficulties of this task.


Archive | 2008

Coping With Alternate Formulations Of Questions And Answers

Brigitte Grau; Olivier Ferret; Martine Hurault-Plantet; Christian Jacquemin; Laura Monceaux; Isabelle Robba; Anne Vilnat

We present in this chapter the QALC system, which has participated in the four TREC QA evaluations. We focus here on the problem of linguistic variation, in order to be able to relate questions and answers. We first present variation at the term level, which consists in retrieving question terms in document sentences even when morphological, syntactic or semantic variations alter them. Our second subject concerns variation at the sentence level, which we handle as different partial reformulations of questions. Questions are associated with extraction patterns based on the question's syntactic type and on the object under query. We present the whole system, so as to situate how QALC deals with variation, together with different evaluations.


cross language evaluation forum | 2006

The bilingual system MUSCLEF at QA@CLEF 2006

Brigitte Grau; Anne-Laure Ligozat; Isabelle Robba; Anne Vilnat; Michael Bagur; Kevin Séjourné

This paper presents our bilingual question answering system MUSCLEF. We underline the difficulties encountered when shifting from a monolingual to a cross-lingual system, then we focus on the evaluation of three modules of MUSCLEF: question analysis, answer extraction and answer fusion. We finally present how we re-used different modules of MUSCLEF to participate in AVE (Answer Validation Exercise).


cross language evaluation forum | 2005

Term translation validation by retrieving bi-terms

Brigitte Grau; Anne-Laure Ligozat; Isabelle Robba; Anne Vilnat

For our second participation in the Question Answering task of CLEF, we kept last year's system, named MUSCLEF, which uses two different translation strategies implemented in two modules. The multilingual module MUSQAT analyzes the French questions, translates “interesting parts”, and then uses these translated terms to search the reference collection. The second strategy consists in translating the question into English and applying QALC, our existing English module. Our purpose in this paper is to analyze term translations and propose a mechanism for selecting correct ones. The manual evaluation of bi-term translations leads us to the conclusion that the bi-term translations found in the corpus can confirm the mono-term translations.
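
As a hedged illustration of the confirmation idea described above (not the MUSQAT implementation), a mono-term translation could be kept when it occurs inside at least one bi-term translation actually retrieved from the collection; the data and helper below are hypothetical.

# Hypothetical sketch: a candidate single-word translation is confirmed
# when it appears within a bi-term found in the target-language corpus.
def confirmed(candidate, corpus_biterms):
    """True if the candidate translation occurs inside some corpus bi-term."""
    return any(candidate in biterm.split() for biterm in corpus_biterms)
biterms_in_corpus = ["nuclear power", "power plant", "solar energy"]
print(confirmed("power", biterms_in_corpus))     # True: supported by bi-term evidence
print(confirmed("strength", biterms_in_corpus))  # False: no bi-term evidence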

Collaboration


Dive into Isabelle Robba's collaborations.

Top Co-Authors

Anne Vilnat, Centre national de la recherche scientifique
Brigitte Grau, Centre national de la recherche scientifique
Laura Monceaux, Centre national de la recherche scientifique
Anne-Laure Ligozat, Centre national de la recherche scientifique
Martine Hurault-Plantet, Centre national de la recherche scientifique
Olivier Ferret, Centre national de la recherche scientifique
Arnaud Grappy, Centre national de la recherche scientifique
Gaël de Chalendar, Centre national de la recherche scientifique