Network


External collaborations at the country level.

Hotspot


Research topics where Luc Lamontagne is active.

Publication


Featured research published by Luc Lamontagne.


Canadian Conference on Artificial Intelligence | 2012

Learning observation models for dialogue POMDPs

Hamid R. Chinaei; Brahim Chaib-draa; Luc Lamontagne

The SmartWheeler project aims at developing an intelligent wheelchair for handicapped people. In this paper, we model the dialogue manager of SmartWheeler in MDP and POMDP frameworks using its collected dialogues. First, we learn the model components of the dialogue MDP based on our previous work. Then, we extend the dialogue MDP to a dialogue POMDP by proposing two observation models learned from dialogues: one based on learned keywords and the other based on learned intentions. The resulting keyword POMDP and intention POMDP are compared based on accumulated mean reward in simulation runs. Our experimental results show that the quality of the intention model is significantly higher than that of the keyword model.
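
As a rough illustration of what learning a keyword-based observation model from annotated dialogues might look like, the following sketch estimates P(observation | state) with additive smoothing. The keyword vocabulary, state labels and dialogue data are hypothetical and not taken from the paper.

```python
# Minimal sketch (not the paper's implementation): estimating a keyword-based
# observation model P(o | s) for a dialogue POMDP from annotated dialogues.
# The keyword list, state labels and dialogue data are hypothetical.
from collections import Counter, defaultdict

KEYWORDS = ["forward", "stop", "turn", "door"]           # assumed keyword vocabulary

def keyword_observation(utterance):
    """Map a (possibly noisy) transcribed utterance to a keyword observation."""
    tokens = utterance.lower().split()
    for kw in KEYWORDS:
        if kw in tokens:
            return kw
    return "<none>"                                       # no keyword recognised

def learn_observation_model(dialogues, smoothing=1.0):
    """Estimate P(observation | state) with additive smoothing.

    `dialogues` is a list of (state_label, utterance) pairs, where the state
    label is the user intention annotated for that turn.
    """
    counts = defaultdict(Counter)
    for state, utterance in dialogues:
        counts[state][keyword_observation(utterance)] += 1
    observations = KEYWORDS + ["<none>"]
    model = {}
    for state, obs_counts in counts.items():
        total = sum(obs_counts.values()) + smoothing * len(observations)
        model[state] = {o: (obs_counts[o] + smoothing) / total for o in observations}
    return model

# Toy usage with made-up annotated turns:
data = [("move", "go forward please"), ("move", "uh forward"),
        ("halt", "stop now"), ("halt", "please stop")]
print(learn_observation_model(data)["move"])
```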


International Conference on Case-Based Reasoning | 2009

Case Retrieval Reuse Net (CR2N): An Architecture for Reuse of Textual Solutions

Ibrahim Adeyanju; Robert Lothian; Somayajulu Sripada; Luc Lamontagne

This paper proposes textual reuse as the identification of reusable textual constructs in a retrieved solution text. This is done by annotating a solution text so that reusable sections are distinguishable from those that need revision. We present a novel and generic architecture, Case Retrieval Reuse Net (CR2N), that can be used to generate these annotations to denote text content as reusable or not. Obtaining evidence for and against reuse is crucial for annotation accuracy, so a comparative evaluation of different evidence-gathering techniques is presented. Evaluation on two domains, weather forecast revision and health & safety incident reporting, shows significantly better accuracy over a retrieve-only system and a comparable reuse technique. This also provides useful insight into the text revision stage.
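
The following toy sketch illustrates only the general idea of annotating a retrieved solution text as reusable or needing revision; it is not the CR2N architecture. The evidence rule (support from the solutions of other retrieved neighbours), the similarity function and the threshold are all assumptions.

```python
# Toy illustration (not CR2N): a solution sentence is marked REUSE when similar
# sentences also appear in the solutions of other cases retrieved for the same
# query, and REVISE otherwise. The Jaccard similarity and threshold are assumed.
def token_overlap(a, b):
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / max(1, len(ta | tb))            # Jaccard similarity

def annotate_solution(solution_sentences, neighbour_solutions, threshold=0.4):
    annotations = []
    for sent in solution_sentences:
        support = max((token_overlap(sent, other)
                       for sol in neighbour_solutions for other in sol),
                      default=0.0)
        annotations.append((sent, "REUSE" if support >= threshold else "REVISE"))
    return annotations
```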


ECCBR '08: Proceedings of the 9th European Conference on Advances in Case-Based Reasoning | 2008

Forgetting Reinforced Cases

Houcine Romdhane; Luc Lamontagne

To meet time constraints, a CBR system must control the time spent searching the case base for a solution. In this paper, we present the results of a case study comparing the effectiveness of several criteria for forgetting cases, thereby bounding the number of cases to be explored during retrieval. The criteria considered are case usage, case value and case density. As we use a sequential game for our experiments, case values are obtained through training with reinforcement learning. Our results indicate that case usage is the most favorable criterion for selecting the cases to be forgotten prior to retrieval. We also have some indications that a mixture of case usage and case value can provide some improvement. However, compacting the case base using case density proved less effective for our application.
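
A minimal sketch of the "forget by usage" idea, bounding the case base before retrieval; the case representation and the usage/value bookkeeping are assumptions, not the authors' implementation.

```python
# Minimal sketch of forgetting cases by usage (details are assumptions):
# keep only the k most frequently used cases before retrieval.
from dataclasses import dataclass

@dataclass
class Case:
    problem: dict
    solution: str
    usage: int = 0        # how often the case was retrieved/reused
    value: float = 0.0    # e.g. a case value learned by reinforcement learning

def forget_by_usage(case_base, max_size):
    """Return a bounded case base, dropping the least-used cases first."""
    if len(case_base) <= max_size:
        return list(case_base)
    return sorted(case_base, key=lambda c: c.usage, reverse=True)[:max_size]

def forget_by_usage_and_value(case_base, max_size, alpha=0.5):
    """Variant mixing usage and learned case value as a weighted score."""
    score = lambda c: alpha * c.usage + (1 - alpha) * c.value
    return sorted(case_base, key=score, reverse=True)[:max_size]
```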


International Conference on Case-Based Reasoning | 2013

Similarity Measures to Compare Episodes in Modeled Traces

Raafat Zarka; Amélie Cordier; Elöd Egyed-Zsigmond; Luc Lamontagne; Alain Mille

This paper reports on a similarity measure to compare episodes in modeled traces. A modeled trace is a structured record of observations captured from users’ interactions with a computer system. An episode is a sub-part of the modeled trace describing a particular task performed by the user. Our method relies on the definition of a similarity measure for comparing elements of episodes, combined with an implementation of the Smith-Waterman algorithm for comparing episodes. This algorithm is both accurate in terms of temporal sequencing and tolerant to the noise generally found in the traces we deal with. Our evaluations show that our approach offers quite satisfactory comparison quality and response time. We illustrate its use in the context of an application for video sequence recommendation.
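
A compact sketch of Smith-Waterman local alignment applied to two episodes (sequences of trace elements); the element similarity function and gap penalty are placeholders, since the paper defines its own element-level similarity measure.

```python
# Sketch of Smith-Waterman local alignment between two episodes (sequences of
# trace elements). The element similarity `sim` and the gap penalty are
# placeholders for the paper's element-level similarity measure.
def smith_waterman(episode_a, episode_b, sim, gap_penalty=0.5):
    n, m = len(episode_a), len(episode_b)
    H = [[0.0] * (m + 1) for _ in range(n + 1)]
    best = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            match = H[i-1][j-1] + sim(episode_a[i-1], episode_b[j-1])
            H[i][j] = max(0.0, match, H[i-1][j] - gap_penalty, H[i][j-1] - gap_penalty)
            best = max(best, H[i][j])
    return best   # highest local-alignment score between the two episodes

# Toy usage: elements are event labels; similarity is 1 for equal labels,
# -1 otherwise (a stand-in for a graded element similarity).
sim = lambda a, b: 1.0 if a == b else -1.0
print(smith_waterman(["open", "seek", "play"], ["open", "play", "stop"], sim))
```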


Procedia Computer Science | 2015

Predicting Unit Testing Effort Levels of Classes: An Exploratory Study based on Multinomial Logistic Regression Modeling

Mourad Badri; Fadel Toure; Luc Lamontagne

The study aims at investigating empirically the ability of a Quality Assurance Indicator (Qi), a metric that we proposed in a previous work, to predict different levels of unit testing effort of classes in object-oriented systems. To capture the unit testing effort of classes, we used four metrics quantifying various perspectives of the code of the corresponding unit test cases. Classes were classified, according to the unit testing effort involved, into five categories (levels). We collected data from two open source Java software systems (ANT and JFREECHART) for which JUnit test cases exist. To explore the ability of the Qi metric to predict the different levels of unit testing effort, we used the Multinomial Logistic Regression (MLR) method. The performance of the Qi metric was compared to that of three well-known source code metrics related respectively to size, complexity and coupling. Results suggest that the MLR model based on the Qi metric is able to accurately predict the different levels of unit testing effort of classes.
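
A minimal scikit-learn sketch of a multinomial logistic regression predicting a five-level testing-effort category from a single predictor standing in for the Qi metric; the data below are synthetic, not the ANT or JFREECHART measurements.

```python
# Minimal sketch (synthetic data) of a multinomial logistic regression that
# predicts a unit-testing-effort level (1..5) from a single predictor standing
# in for the Qi metric, using scikit-learn.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
qi = rng.uniform(0.0, 1.0, size=200)                  # fake Qi values per class
levels = np.clip((qi * 5).astype(int) + rng.integers(-1, 2, 200), 0, 4) + 1

model = LogisticRegression(multi_class="multinomial", max_iter=1000)
model.fit(qi.reshape(-1, 1), levels)

# Predicted effort level and class probabilities for a new class with Qi = 0.7
print(model.predict([[0.7]]), model.predict_proba([[0.7]]).round(2))
```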


Journal of Software Engineering Research and Development | 2014

A metrics suite for JUnit test code: a multiple case study on open source software

Fadel Toure; Mourad Badri; Luc Lamontagne

Background: The code of JUnit test cases is commonly used to characterize software testing effort. Different metrics have been proposed in the literature to measure various perspectives of the size of JUnit test cases. Unfortunately, there is little understanding of the empirical application of these metrics, particularly which metrics are more useful in terms of the information they provide.

Methods: This paper aims at proposing a unified metrics suite that can be used to quantify the unit testing effort. We addressed the unit testing effort from the perspective of unit test case construction, and particularly the effort involved in writing the code of JUnit test cases. We used five unit test case metrics in our study, two of which were introduced in a previous work. We conducted an empirical study in three main stages. We collected data from six open source Java software systems, of different sizes and from different domains, for which JUnit test cases exist. In a first stage, we performed a Principal Component Analysis to determine whether the analyzed unit test case metrics are independent or measure similar structural aspects of the code of JUnit test cases. In a second stage, we used clustering techniques to determine the unit test case metrics that are the least volatile, i.e. the least affected by the style adopted by developers when writing the code of test cases. In a third stage, we used correlation and linear regression analysis to evaluate the relationships between the internal software class attributes and the test case metrics.

Results and Conclusions: The main goal of this study was to identify a subset of unit test case metrics: (1) providing useful information on the effort involved in writing the code of JUnit test cases, (2) that are independent from each other, and (3) that are the least volatile. Results confirm the conclusions of our previous work and show, in addition, that: (1) the set of analyzed unit test case metrics can be reduced to a subset of two independent metrics maximizing the information provided by the whole set, (2) these metrics are the least volatile, and (3) they are also the most correlated to the internal software class attributes.
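
A short sketch of the Principal Component Analysis stage on a hypothetical table of test-code metrics; the metric names and values are invented for illustration and are not the paper's data.

```python
# Sketch of the PCA step described above, on a hypothetical table of JUnit
# test-case metrics (metric names and values are made up for illustration).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

metrics = ["TLOC", "TASSERT", "TNOO", "TINVOK", "TDATA"]   # assumed metric names
X = np.random.default_rng(1).lognormal(mean=3.0, sigma=0.5, size=(50, len(metrics)))

pca = PCA()
components = pca.fit_transform(StandardScaler().fit_transform(X))

# If the first one or two components explain most of the variance, the metrics
# largely measure overlapping aspects of test-code size/structure.
print(pca.explained_variance_ratio_.round(2))
```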


Lecture Notes in Computer Science | 2006

Combining multiple similarity metrics using a multicriteria approach

Luc Lamontagne; Irène Abi-Zeid

The design of a CBR system involves the use of similarity metrics. For many applications, various functions can be adopted to compare case features and to aggregate them into a global similarity measure. Given the availability of multiple similarity metrics, the designer is left with two options to come up with a working system: either select one similarity metric or try to combine multiple metrics into a super-metric. In this paper, we study how techniques borrowed from multicriteria decision aid can be applied to CBR for combining the results of multiple similarity metrics. The problem of multi-metric retrieval is presented as an instance of the problem of ranking alternatives based on multiple attributes. Discrete methods such as ELECTRE II have been proposed by the multicriteria decision aid community to address such situations. We conducted our experiments on ranking cases with ELECTRE II, a procedure based on pairwise comparisons, using textual cases and multiple metrics. Our results indicate that combining metrics with a multicriteria decision aid method can increase retrieval precision and provide an advantage over weighted-sum combinations, especially when similarity is measured on scales that are different in nature.
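
The following heavily simplified sketch illustrates pairwise, concordance-based outranking over several similarity metrics; full ELECTRE II also uses discordance indices and strong/weak outranking relations, which are omitted here, and the weights and threshold are assumptions.

```python
# Greatly simplified sketch of ranking retrieved cases by pairwise outranking
# over several similarity metrics (concordance only; not full ELECTRE II).
def concordance(a_scores, b_scores, weights):
    """Weight of the metrics on which case a is at least as similar as case b."""
    total = sum(weights)
    return sum(w for sa, sb, w in zip(a_scores, b_scores, weights) if sa >= sb) / total

def rank_cases(case_scores, weights, threshold=0.6):
    """case_scores maps case id -> list of similarity scores (one per metric).
    A case's rank score is the number of other cases it outranks."""
    ids = list(case_scores)
    outranked = {cid: 0 for cid in ids}
    for a in ids:
        for b in ids:
            if a != b and concordance(case_scores[a], case_scores[b], weights) >= threshold:
                outranked[a] += 1
    return sorted(ids, key=lambda cid: outranked[cid], reverse=True)

# Toy usage: three cases scored by three similarity metrics of equal weight.
scores = {"c1": [0.9, 0.4, 0.7], "c2": [0.6, 0.8, 0.5], "c3": [0.3, 0.2, 0.9]}
print(rank_cases(scores, weights=[1, 1, 1]))
```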


International Conference on Agents and Artificial Intelligence | 2009

Application of Hidden Topic Markov Models on Spoken Dialogue Systems

Hamid R. Chinaei; Brahim Chaib-draa; Luc Lamontagne

A common problem in spoken dialogue systems is finding the intention of the user. This problem deals with obtaining one or several topics for each transcribed, possibly noisy, sentence of the user. In this work, we apply a recent unsupervised learning method, Hidden Topic Markov Models (HTMM), to finding the intention of the user in dialogues. This technique combines Latent Dirichlet Allocation (LDA) and Hidden Markov Models (HMM) in order to learn the topics of documents. We show that HTMM can also be used to obtain intentions for the noisy transcribed sentences of the user in spoken dialogue systems. We argue that, in this way, we can learn the possible states of a speech domain, which can be used in the design stage of its spoken dialogue system. Furthermore, we discuss how the learned model can be augmented and used in a POMDP (Partially Observable Markov Decision Process) dialogue manager of the spoken dialogue system.
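
A simplified illustration of the HTMM intuition, not the actual HTMM inference: each utterance emits words from one topic (intention) and an HMM-style transition prior discourages topic switches between consecutive utterances. The word-topic probabilities below are made up; in practice they would be learned.

```python
# Simplified illustration of the HTMM idea: per-utterance topic emissions plus
# a "sticky" HMM transition prior, decoded with Viterbi. Not the HTMM algorithm.
import math

def sticky_topic_viterbi(utterances, word_topic, topics, stay_logp=-0.1, switch_logp=-2.0):
    """Assign one topic (intention) per utterance via Viterbi decoding."""
    def emission(utt, t):
        return sum(math.log(word_topic[t].get(w, 1e-6)) for w in utt.split())

    scores = [{t: emission(utterances[0], t) for t in topics}]
    back = []
    for utt in utterances[1:]:
        prev, cur, ptr = scores[-1], {}, {}
        for t in topics:
            best_prev = max(prev, key=lambda p: prev[p] + (stay_logp if p == t else switch_logp))
            cur[t] = prev[best_prev] + (stay_logp if best_prev == t else switch_logp) + emission(utt, t)
            ptr[t] = best_prev
        scores.append(cur)
        back.append(ptr)
    # Backtrack the best topic sequence.
    path = [max(scores[-1], key=scores[-1].get)]
    for ptr in reversed(back):
        path.append(ptr[path[-1]])
    return list(reversed(path))

# Toy usage with made-up word-topic probabilities:
word_topic = {"move": {"go": 0.4, "forward": 0.4}, "halt": {"stop": 0.6, "now": 0.2}}
print(sticky_topic_viterbi(["go forward", "uh forward", "stop now"], word_topic, ["move", "halt"]))
```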


Artificial Intelligence: Methodology, Systems, and Applications | 1998

An agent system for intelligent situation assessment

Qiang Yang; Irène Abi-Zeid; Luc Lamontagne

Coordinating Search and Rescue (SAR) operations is a knowledge- and information-intensive task. Upon receiving an initial indication of a possible aircraft-related problem, a Rescue Coordination Center (RCC) Controller sets out to find out more about the nature of the problem. This situation assessment phase is highly complex due to the diverse and sophisticated nature of the many information sources. In this paper, we present an intelligent agent architecture incorporating multiple, continual information planning agents for assisting the RCC controller in performing critical situation assessment tasks. The agents monitor the actions of the human controller, decide when and how to acquire more information to help the controller, and remind him of important steps that may have been overlooked. The system is designed using several technologies, including hierarchical task networks, case-based retrieval and intelligent agent systems.


INFOR | 2011

A Constraint Optimization Approach for the Allocation of Multiple Search Units in Search and Rescue Operations

Irène Abi-Zeid; Oscar Nilo; Luc Lamontagne

Search and Rescue (SAR) comprises the search for, and provision of aid to, persons who are, or who are feared to be, in distress or in imminent danger of loss of life. Time is a crucial factor for survivors, who must be found quickly, and search planning can become complex when the search area is large and multiple search resources are involved. The problem we address in this paper is that of defining and assigning multiple non-overlapping rectangular sub-areas to search units (search aircraft) such that the search plan is operationally feasible and the total probability of success is maximized. We present the algorithms we developed for the search resource allocation problem for aeronautical SAR incidents when multiple indivisible searchers are present. These algorithms are based on classical search theory and on constraint programming. We assume that the search effort is continuous and measured by track length, that the search object is stationary, and that the search is conducted in discrete space. We present experimental results for a realistic overland SAR case.
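
A sketch of the underlying search-theory objective only, not the paper's constraint-programming model for rectangular area assignment: search effort units are allocated greedily to maximize the total probability of success under an exponential detection function, with made-up numbers.

```python
# Sketch of the search-theory objective: greedily allocate discrete units of
# search effort across sub-areas to maximise the total probability of success,
# POS = sum_i POC_i * (1 - exp(-sweep_i * z_i)), assuming an exponential
# detection function. All numbers are illustrative, not from the paper.
import math

def allocate_effort(poc, sweep, total_units):
    """poc[i]: probability of containment in area i; sweep[i]: detection rate
    per effort unit; total_units: available (indivisible) effort units."""
    z = [0] * len(poc)
    for _ in range(total_units):
        # Marginal gain of adding one more unit of effort to area i.
        gain = [poc[i] * (math.exp(-sweep[i] * z[i]) - math.exp(-sweep[i] * (z[i] + 1)))
                for i in range(len(poc))]
        z[gain.index(max(gain))] += 1
    pos = sum(poc[i] * (1 - math.exp(-sweep[i] * z[i])) for i in range(len(poc)))
    return z, pos

print(allocate_effort(poc=[0.5, 0.3, 0.2], sweep=[0.4, 0.8, 0.6], total_units=10))
```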

Collaboration


Dive into Luc Lamontagne's collaborations.

Top Co-Authors

Fadel Toure

Université du Québec
