Publication


Featured research published by Leila Kosseim.


Computational Linguistics in the Netherlands | 2000

Proper Name Extraction from Non-Journalistic Texts

Thierry Poibeau; Leila Kosseim

This paper discusses the influence of the corpus on the automatic identification of proper names in texts. Techniques developed for the newswire genre are generally not sufficient to deal with larger corpora containing texts that do not follow strict writing constraints (for example, e-mail messages, transcriptions of oral conversations, etc.). After a brief review of the research performed on news texts, we present some of the problems involved in the analysis of two different corpora: e-mails and hand-transcribed telephone conversations. Once the sources of errors have been presented, we then describe an approach to adapt a proper name extraction system developed for newspaper texts to the analysis of e-mail.
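
To make the pattern-adaptation idea concrete, here is a minimal, illustrative sketch (not the authors' system): newswire-style extraction leans on capitalization and title cues, while e-mail offers structured cues such as From: headers that stay reliable when capitalization does not. The patterns and the sample message below are hypothetical.

```python
import re

# Newswire-style cue: a title followed by a capitalized name.
TITLE_CUE = re.compile(r"\b(?:Mr|Mrs|Ms|Dr|Prof)\.\s+([A-Z][a-z]+(?:\s+[A-Z][a-z]+)*)")
# E-mail-specific cue: the display name in a From: header (capitalization unreliable).
FROM_HEADER = re.compile(r'^From:\s*"?([^"<\n]+?)"?\s*<', re.MULTILINE)

def extract_names(text):
    """Collect proper-name candidates from title cues and e-mail headers."""
    names = set(TITLE_CUE.findall(text))
    names.update(n.strip() for n in FROM_HEADER.findall(text))
    return names

email = 'From: "jean tremblay" <jt@example.org>\nPlease ask Dr. Marie Dubois about the printer.'
print(extract_names(email))  # e.g. {'Marie Dubois', 'jean tremblay'}
```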


Canadian Conference on Artificial Intelligence | 2003

Answer formulation for question-answering

Leila Kosseim; Luc Plamondon; Louis-Julien Guillemette

In this paper, we describe our experiments in evaluating answer formulation for question-answering (QA) systems. In the context of QA, answer formulation can serve two purposes: improving answer extraction or improving human-computer interaction (HCI). Each purpose has different precision/recall requirements. We present our experiments for both purposes and argue that formulations of better grammatical quality are beneficial for both answer extraction and HCI.
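
As an illustration of what answer formulation involves (a sketch of the general idea, not the system evaluated in the paper), the following turns a factoid question into declarative templates with an <ANSWER> slot; the question patterns are invented for the example.

```python
import re

def formulate(question):
    """Return candidate declarative templates; <ANSWER> marks the answer slot."""
    q = question.strip().rstrip("?")
    m = re.match(r"(?i)who\s+(invented|wrote|discovered)\s+(.+)", q)
    if m:
        verb, obj = m.groups()
        return [f"{obj} was {verb} by <ANSWER>", f"<ANSWER> {verb} {obj}"]
    m = re.match(r"(?i)when\s+was\s+(.+?)\s+(born|built|founded)", q)
    if m:
        subj, verb = m.groups()
        return [f"{subj} was {verb} in <ANSWER>", f"{subj} was {verb} on <ANSWER>"]
    return []

print(formulate("Who invented the telephone?"))
# ['the telephone was invented by <ANSWER>', '<ANSWER> invented the telephone']
```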


Computational Intelligence | 2000

Choosing Rhetorical Structures to Plan Instructional Texts

Leila Kosseim; Guy Lapalme

This paper discusses a fundamental problem in natural language generation: how to organize the content of a text in a coherent and natural way. In this research, we set out to determine the semantic content and the rhetorical structure of texts and to develop heuristics to perform this process automatically within a text generation framework. The study was performed on a specific language and textual genre: French instructional texts. From a corpus analysis of these texts, we determined nine senses typically communicated in instructional texts and seven rhetorical relations used to present these senses. From this analysis, we then developed a set of presentation heuristics that determine how the senses to be communicated should be organized rhetorically in order to create a coherent and natural text. The heuristics are based on five types of constraints: conceptual, semantic, rhetorical, pragmatic, and intentional constraints. To verify the heuristics, we developed the spin natural language generation system, which performs all steps of text generation but focuses on the determination of the content and the rhetorical structure of the text.
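
The sketch below illustrates the general shape of such presentation heuristics: ordered rules that map a sense to be communicated, together with contextual constraints, onto a rhetorical relation. The senses, relations and constraints shown are hypothetical placeholders, not the nine senses and seven relations identified in the corpus study.

```python
RULES = [
    # (constraint test, RST relation); senses, relations and constraints are hypothetical
    (lambda s, ctx: s == "warning" and ctx.get("risk") == "high",    "MOTIVATION"),
    (lambda s, ctx: s == "precondition",                             "CONDITION"),
    (lambda s, ctx: s == "manner" and ctx.get("reader") == "novice", "ELABORATION"),
    (lambda s, ctx: s == "operation",                                "SEQUENCE"),
]

def choose_relation(sense, context):
    """Return the first rhetorical relation whose constraints are satisfied."""
    for test, relation in RULES:
        if test(sense, context):
            return relation
    return "JOINT"  # default when no heuristic applies

plan = [("precondition", {}), ("operation", {}), ("warning", {"risk": "high"})]
print([(sense, choose_relation(sense, ctx)) for sense, ctx in plan])
```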


Data and Knowledge Engineering | 2005

Using semantic templates for a natural language interface to the CINDI virtual library

Niculae Stratica; Leila Kosseim; Bipin C. Desai

In this paper, we present our work in building a template-based system for translating English sentences into SQL queries for a relational database system. The input sentences are syntactically parsed using the Link Parser, and semantically parsed through the use of domain-specific templates. The system is composed of a pre-processor and a run-time module. The pre-processor builds a conceptual knowledge base from the database schema using WordNet. This knowledge base is then used at run time to semantically parse the input and create the corresponding SQL query. The system is meant to be domain independent and has been tested with the CINDI database that contains information on a virtual library.
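
A minimal sketch of the template-based idea (not the CINDI system itself): a small schema-derived lexicon maps content words in the request onto tables and columns, and a template instantiates the SQL. The table names, columns and query shape below are hypothetical.

```python
SCHEMA_LEXICON = {  # built offline by a pre-processor (e.g. from the schema and WordNet)
    "book": ("resource", "title"),
    "author": ("resource", "author"),
}

def to_sql(sentence):
    """Tiny template: handles 'list the <entity> by <value>' style requests only."""
    words = sentence.lower().rstrip("?.").split()
    for term, (table, column) in SCHEMA_LEXICON.items():
        if (term in words or term + "s" in words) and "by" in words:
            value = " ".join(words[words.index("by") + 1:])
            return f"SELECT {column} FROM {table} WHERE author = ?", (value,)
    return None

print(to_sql("List the books by Jane Austen"))
# ('SELECT title FROM resource WHERE author = ?', ('jane austen',))
```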


Fourth International Workshop on Software Quality Assurance | 2007

Toward a text classification system for the quality assessment of software requirements written in natural language

Olga Ormandjieva; Ishrar Hussain; Leila Kosseim

Requirements Engineering (RE) is concerned with the gathering, analyzing, specifying and validating of user requirements that are documented mostly in natural language. The artifact produced by the RE process is the software requirements specification (SRS) document. The success of a software project largely depends on the quality of SRS documentation, which serves as an input to the design, coding and testing phases. This paper approaches the problem of the automatic quality assessment of textual requirements from an innovative point of view, namely the use of the Natural Language Processing (NLP) text classification technique. The paper proposes a quality model for the requirements text and a text classification system to automate the quality assessment process. A large study evaluating the discriminatory power of the quality characteristics and the feasibility of a tool for the automatic detection of ambiguities in requirements documentation is presented. The study also provides a benchmark for such an evaluation and an upper bound on what we can expect automatic requirements quality assessment tools to achieve. The reported research is part of a larger project on the applicability of NLP techniques to assess the quality of artifacts produced in RE.
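
To give one concrete flavour of what automatic quality assessment can look at (an illustration only, not the quality model proposed in the paper), the sketch below flags requirement sentences containing known ambiguity indicators, the sort of surface feature a text classifier can build on. The phrase list and example requirements are invented.

```python
# Naive substring matching, good enough for a sketch.
WEAK_PHRASES = {"as appropriate", "if possible", "user friendly", "fast", "etc",
                "and/or", "adequate", "to the extent possible"}

def ambiguity_score(requirement):
    """Fraction of known weak phrases present in the sentence (0 = none found)."""
    text = requirement.lower()
    hits = [p for p in WEAK_PHRASES if p in text]
    return len(hits) / len(WEAK_PHRASES), hits

reqs = [
    "The system shall respond to a search query within 2 seconds.",
    "The interface should be user friendly and fast, if possible.",
]
for r in reqs:
    score, hits = ambiguity_score(r)
    print(f"{score:.2f}  {hits}  {r}")
```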


Data and Knowledge Engineering | 2008

Improving the performance of question answering with semantically equivalent answer patterns

Leila Kosseim; Jamileh Yousefi

In this paper, we discuss a novel technique based on semantic constraints to improve the performance and portability of a reformulation-based question answering system. First, we present a method for acquiring semantic-based reformulations automatically. The goal is to generate patterns from sentences retrieved from the Web based on lexical, syntactic and semantic constraints. Once these constraints have been defined, we present a method to evaluate and re-rank candidate answers that satisfy these constraints using redundancy. The two approaches have been evaluated independently and in combination. The evaluation on 493 questions from TREC-11 shows that the automatically acquired semantic patterns increase the MRR by 26%, the re-ranking using semantic redundancy increases the MRR by 67%, and the two approaches combined increase the MRR by 73%. This new technique allows us to avoid the manual work of formulating semantically equivalent reformulations, while still increasing performance.
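
For reference, Mean Reciprocal Rank (MRR) is the average over questions of 1/rank of the first correct answer (0 when no correct answer is returned). The small example below uses made-up ranks purely to show the computation; it does not reproduce the paper's figures.

```python
def mrr(first_correct_ranks):
    """first_correct_ranks: rank of the first correct answer per question, or None."""
    scores = [1.0 / r if r else 0.0 for r in first_correct_ranks]
    return sum(scores) / len(scores)

baseline = mrr([1, None, 3, 2, None])  # hypothetical baseline ranks
improved = mrr([1, 2, 1, 2, None])     # hypothetical ranks after re-ranking
print(f"baseline MRR = {baseline:.3f}, improved MRR = {improved:.3f}, "
      f"relative gain = {(improved - baseline) / baseline:.0%}")
```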


Data and Knowledge Engineering | 2001

Using information extraction and natural language generation to answer e-mail

Leila Kosseim; Stéphane Beauregard; Guy Lapalme

This paper discusses the use of information extraction and natural language generation in the design of an automated e-mail answering system. We analyse short free-form texts and generate a customised and linguistically motivated answer to frequently asked questions. We describe the approach and the design of a system currently being developed to answer e-mail in French regarding printer-related questions addressed to the technical support staff of our computer science department.
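
A minimal sketch of the overall pipeline, assuming hypothetical slot names and reply wording (this is not the system described): extraction fills a small template from the incoming message, and generation instantiates a well-formed reply from it.

```python
import re

PRINTER = re.compile(r"\bprinter\s+(\w+[-\w]*)", re.IGNORECASE)
ERROR = re.compile(r"\b(paper jam|out of toner|offline)\b", re.IGNORECASE)

def extract(message):
    """Information extraction: fill the answer template's slots from the message."""
    printer = PRINTER.search(message)
    error = ERROR.search(message)
    return {"printer": printer.group(1) if printer else None,
            "error": error.group(1).lower() if error else None}

def generate(slots):
    """Generation: instantiate a canned but well-formed reply from the filled slots."""
    if slots["printer"] and slots["error"]:
        return (f"Thank you for your message. The '{slots['error']}' problem on printer "
                f"{slots['printer']} has been logged; a technician will follow up shortly.")
    return "Thank you for your message. Could you tell us which printer is affected?"

msg = "Hi, printer hp-3rd-floor reports a paper jam again, can someone have a look?"
print(generate(extract(msg)))
```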


Applications of Natural Language to Data Bases | 2008

Using Linguistic Knowledge to Classify Non-functional Requirements in SRS documents

Ishrar Hussain; Leila Kosseim; Olga Ormandjieva

Non-functional Requirements (NFRs) such as software quality attributes, software design constraints and software interface requirements hold crucial information about the constraints on the software system under development and its behavior. NFRs are subjective in nature and have a broad impact on the system as a whole. Being distinct from Functional Requirements (FRs), NFRs are given special attention, as they play an integral role during software modeling and development. However, since Software Requirements Specification (SRS) documents are, in practice, written in natural language and solely reflect the perspectives of the clients, they often end up with FR and NFR statements mixed together in the same paragraphs, and it is left to software analysts to classify and separate them manually. The research presented in this paper aims to automate the detection of NFR sentences by using a text classifier equipped with a part-of-speech (POS) tagger. The results outperform recent work in the field, achieving an accuracy of 98.56% using 10-fold cross-validation over the same data used in the literature. The research reported in this paper is part of a larger project on applying Natural Language Processing techniques in Software Requirements Engineering.
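
A hedged sketch of the general technique using scikit-learn (not the authors' classifier, features or data): a bag-of-words model evaluated with 10-fold cross-validation over an invented toy corpus. In the paper, sentences are additionally represented with part-of-speech based features.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

sentences = [
    "The system shall encrypt all stored passwords.",            # NFR (security)
    "Response time shall not exceed two seconds.",               # NFR (performance)
    "The user can export the report as PDF.",                    # FR
    "The administrator adds a new account from the dashboard.",  # FR
] * 10  # repeat the toy data so 10-fold cross-validation has enough samples
labels = ["NFR", "NFR", "FR", "FR"] * 10

clf = make_pipeline(CountVectorizer(ngram_range=(1, 2)), MultinomialNB())
scores = cross_val_score(clf, sentences, labels, cv=10)
print(f"10-fold accuracy: {scores.mean():.2f}")
```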


Data and Knowledge Engineering | 2013

Approximation of COSMIC functional size to support early effort estimation in Agile

Ishrar Hussain; Leila Kosseim; Olga Ormandjieva

The software industry's demand for estimating development effort in the early phases of development is met by measuring software size from user requirements. A large number of companies have adopted Agile processes, which, although they promise rapid software development, place a heavy burden on development teams, who must continually make decisions and exercise expert judgement when estimating the size of the software components to be developed at each iteration. COSMIC, on the other hand, is an ISO/IEC international standard that presents an objective method of measuring the functional size of software from user requirements. However, its measurement process is not compatible with Agile processes, as COSMIC requires user requirements to be formalised and decomposed at a level of granularity where external interactions with the system are visible to the human measurer. Agile processes avoid this time-consuming task, leaving only quick, subjective judgement by human measurers for size measurement, which often proves erroneous. In this article, we address these issues by presenting an approach that approximates COSMIC functional size from informally written textual requirements and demonstrate its applicability in popular Agile processes. We also discuss the results of a preliminary experiment studying the feasibility of automating our approach using supervised text mining.
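
As a rough illustration of the idea (not the authors' approximation model): COSMIC counts data movements of four types (Entry, Exit, Read, Write) at 1 CFP each, so approximating size amounts to predicting how many such movements a requirement implies. Below, a trivial keyword heuristic stands in for the supervised text-mining step; the cues and user stories are invented.

```python
MOVEMENT_CUES = {  # hypothetical surface cues per COSMIC data-movement type
    "Entry": ["enter", "submit", "provide"],
    "Exit":  ["display", "send", "notif"],
    "Read":  ["retriev", "look up", "check"],
    "Write": ["save", "store", "updat"],
}

def approximate_cfp(user_story):
    """Count cued data-movement types in one user story (1 CFP per movement found)."""
    text = user_story.lower()
    return sum(any(cue in text for cue in cues) for cues in MOVEMENT_CUES.values())

backlog = [
    "As a librarian, I submit a new book record and the system stores it.",
    "As a member, I look up a title and the system displays its availability.",
]
print(sum(approximate_cfp(story) for story in backlog), "CFP (rough approximation)")  # 4 CFP
```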


Natural Language Generation | 1994

Content and rhetorical status selection in instructional texts

Leila Kosseim; Guy Lapalme

This paper discusses an approach to planning the content of instructional texts. The research is based on a corpus study of 15 French procedural texts ranging from step-by-step device manuals to general artistic procedures. The approach taken starts from an AI task planner building a task representation, from which semantic carriers are selected. The most appropriate RST relations to communicate these carriers are then chosen according to heuristics developed during the corpus analysis.

Collaboration


Dive into Leila Kosseim's collaborations.

Top Co-Authors

Guy Lapalme
Université de Montréal