
Publication


Featured research published by Anne Vilnat.


Web Intelligence | 2007

Lexical validation of answers in Question Answering

Anne-Laure Ligozat; Brigitte Grau; Anne Vilnat; Isabelle Robba; Arnaud Grappy

Question answering (QA) aims at retrieving precise information from a large collection of documents, typically the Web. Different techniques can be used to find relevant information, and to compare these techniques, it is important to evaluate question answering systems. The objective of an Answer Validation task is to estimate the correctness of an answer returned by a QA system for a question, according to the text snippet given to support it. In this article, we present a lexical strategy for deciding if the snippets justify the answers, based on our own question answering system. We discuss our results, and show the possible extensions of our strategy.
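The article's actual validation features are not reproduced here; as a hedged illustration only, a minimal lexical-overlap check of the kind such a strategy builds on could look like the sketch below (the function names, stop-word list, and threshold are all hypothetical, not taken from the paper):

```python
def lexical_overlap(question: str, answer: str, snippet: str) -> float:
    """Fraction of content terms from the question (plus the answer)
    that reappear in the supporting snippet."""
    stop = {"the", "a", "an", "of", "in", "is", "what", "who", "when", "where"}
    terms = {w for w in question.lower().split() if w not in stop}
    terms.update(answer.lower().split())
    snippet_words = set(snippet.lower().split())
    return len(terms & snippet_words) / len(terms) if terms else 0.0

def validate(question: str, answer: str, snippet: str, threshold: float = 0.6) -> bool:
    """Accept the answer when enough question/answer terms occur in the snippet."""
    return lexical_overlap(question, answer, snippet) >= threshold
```

A real system would add lemmatization and synonym expansion so that morphological variants of a term still count as matches.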


Web Intelligence | 2011

Selecting Answers to Questions from Web Documents by a Robust Validation Process

Arnaud Grappy; Brigitte Grau; Mathieu-Henri Falco; Anne-Laure Ligozat; Isabelle Robba; Anne Vilnat

Question answering (QA) systems aim at finding answers to questions posed in natural language using a collection of documents. When the collection is extracted from the Web, the structure and style of the texts are quite different from those of newspaper articles. We developed a QA system based on an answer validation process able to handle Web specificities. A large number of candidate answers are extracted from short passages and then validated according to question and passage characteristics. The validation module is based on a machine learning approach: it takes into account criteria characterizing both passage and answer relevance at the surface, lexical, syntactic, and semantic levels to deal with different types of texts. We present and compare results obtained for factual questions posed on a Web collection and on a newspaper collection, and show that our system outperforms a baseline by up to 48% in MRR.
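The abstract reports gains of up to 48% in MRR (Mean Reciprocal Rank), a standard ranking metric for QA: each question scores the reciprocal of the rank of its first correct answer, averaged over all questions. A minimal sketch (function name and data layout are mine, not the paper's):

```python
def mean_reciprocal_rank(ranked_answers, gold):
    """MRR over a set of questions.

    ranked_answers: one ranked candidate list per question.
    gold: one set of correct answers per question.
    """
    total = 0.0
    for candidates, correct in zip(ranked_answers, gold):
        for rank, answer in enumerate(candidates, start=1):
            if answer in correct:
                total += 1.0 / rank
                break  # only the first correct answer counts
    return total / len(ranked_answers)
```

A question whose correct answer never appears in the ranked list simply contributes 0.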


Conference of the European Chapter of the Association for Computational Linguistics | 2003

PEAS, the first instantiation of a comparative framework for evaluating parsers of French

Véronique Gendner; Gabriel Illouz; Michèle Jardino; Laura Monceaux; Patrick Paroubek; Isabelle Robba; Anne Vilnat

This paper presents PEAS, the first comparative evaluation framework for parsers of French whose annotation formalism allows the annotation of both constituents and functional relations. A test corpus containing an assortment of different text types has been built and part of it has been manually annotated. Precision/Recall and crossing brackets metrics will be adapted to our formalism and applied to the parses produced by one parser from academia and another one from industry in order to validate the framework.
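The crossing-brackets metric mentioned above counts how many constituent spans in a system parse overlap a gold-standard span without one containing the other. A minimal sketch over `(start, end)` token intervals (this representation is assumed for illustration, not taken from the paper):

```python
def crossing_brackets(candidate_spans, gold_spans):
    """Count candidate constituent spans that cross some gold span.

    Spans are (start, end) token intervals with end exclusive.
    Two spans cross when they overlap but neither contains the other.
    """
    def crosses(a, b):
        return (a[0] < b[0] < a[1] < b[1]) or (b[0] < a[0] < b[1] < a[1])

    return sum(any(crosses(c, g) for g in gold_spans) for c in candidate_spans)
```

Spans that are nested or identical do not count as crossings, which is why the metric complements plain precision/recall on bracketings.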


MLQA '06: Proceedings of the Workshop on Multilingual Question Answering | 2006

Evaluation and improvement of cross-lingual question answering strategies

Anne-Laure Ligozat; Brigitte Grau; Isabelle Robba; Anne Vilnat

This article presents a bilingual question answering system, which is able to process questions and documents both in French and in English. Two cross-lingual strategies are described and evaluated. First, we study the contribution of biterms translation, and the influence of the completion of the translation dictionaries. Then, we propose a strategy for transferring the question analysis from one language to the other, and we study its influence on the performance of our system.


International Conference on Tools with Artificial Intelligence | 2007

Towards an Automatic Validation of Answers in Question Answering

Anne-Laure Ligozat; Brigitte Grau; Anne Vilnat; Isabelle Robba; Arnaud Grappy

Question answering (QA) aims at retrieving precise information from a large collection of documents. Different techniques can be used to find relevant information, and to compare these techniques, it is important to evaluate QA systems. The objective of an Answer Validation task is thus to judge the correctness of an answer returned by a QA system for a question, according to the text snippet given to support it. We participated in such a task in 2006. In this article, we present our strategy for deciding if the snippets justify the answers: a strategy based on our own QA system, comparing the answers it returned with the answer to judge. We discuss our results, then we point out the difficulties of this task.


International Conference on Natural Language Processing | 2010

Comparison of paraphrase acquisition techniques on sentential paraphrases

Houda Bouamor; Aurélien Max; Anne Vilnat

In this article, the task of acquisition of subsentential paraphrases is discussed and several automatic techniques are presented. We describe an evaluation methodology to compare these techniques and some of their combinations. This methodology is applied on two corpora of sentential paraphrases obtained by multiple translations. The conclusions that are drawn can be used to guide future work for improving existing techniques.


Archive | 2008

Coping With Alternate Formulations Of Questions And Answers

Brigitte Grau; Olivier Ferret; Martine Hurault-Plantet; Christian Jacquemin; Laura Monceaux; Isabelle Robba; Anne Vilnat

We present in this chapter the QALC system, which has participated in the four TREC QA evaluations. We focus here on the problem of linguistic variation in relating questions and answers. We first present variation at the term level, which consists in retrieving question terms in document sentences even when morphological, syntactic, or semantic variations alter them. Our second subject is variation at the sentence level, which we handle as different partial reformulations of questions. Questions are associated with extraction patterns based on the question's syntactic type and the object under query. We present the whole system, situating how QALC deals with variation, along with several evaluations.


Cross Language Evaluation Forum | 2006

The bilingual system MUSCLEF at QA@CLEF 2006

Brigitte Grau; Anne-Laure Ligozat; Isabelle Robba; Anne Vilnat; Michael Bagur; Kevin Séjourné

This paper presents our bilingual question answering system MUSCLEF. We underline the difficulties encountered when shifting from a monolingual to a cross-lingual system, then we focus on the evaluation of three modules of MUSCLEF: question analysis, answer extraction, and answer fusion. We finally present how we reused different modules of MUSCLEF to participate in AVE (Answer Validation Exercise).


Cross Language Evaluation Forum | 2005

Term translation validation by retrieving bi-terms

Brigitte Grau; Anne-Laure Ligozat; Isabelle Robba; Anne Vilnat

For our second participation in the Question Answering task of CLEF, we kept last year's system, MUSCLEF, which uses two different translation strategies implemented in two modules. The multilingual module MUSQAT analyzes the French questions, translates their "interesting parts", and then uses these translated terms to search the reference collection. The second strategy consists in translating the question into English and applying QALC, our existing English module. Our purpose in this paper is to analyze term translations and propose a mechanism for selecting correct ones. A manual evaluation of bi-term translations leads us to the conclusion that the bi-term translations found in the corpus can confirm the mono-term translations.


Computational Intelligence | 1988

A question-answering system for the French Yellow Pages

P. Herman; Gérard Sabah; Anne Vilnat

This paper describes a dialogue-based system which is intended as an intelligent natural language interface to the French Yellow Pages. We do not assume that the user knows how the Yellow Pages are organized, and we paraphrase the user's request, if necessary, so as to better search for the desired information. We do, however, assume that the reason the user is online is to find an address and phone number for some supplier.

Collaboration


Dive into Anne Vilnat's collaborations.

Top Co-Authors

Isabelle Robba (Centre national de la recherche scientifique)
Brigitte Grau (Centre national de la recherche scientifique)
Laura Monceaux (Centre national de la recherche scientifique)
Anne-Laure Ligozat (Centre national de la recherche scientifique)
Martine Hurault-Plantet (Centre national de la recherche scientifique)
Houda Bouamor (Carnegie Mellon University)
Véronique Moriceau (Centre national de la recherche scientifique)