Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Mathieu Serrurier is active.

Publication


Featured research published by Mathieu Serrurier.


Data and Knowledge Engineering | 2007

Learning fuzzy rules with their implication operators

Mathieu Serrurier; Didier Dubois; Henri Prade; Thomas Sudkamp

Fuzzy predicates have been incorporated into machine learning and data mining to extend the types of data relationships that can be represented, to facilitate the interpretation of rules in linguistic terms, and to avoid unnatural boundaries in partitioning attribute domains. The confidence of an association is classically measured by the co-occurrence of attributes in tuples in the database. The semantics of fuzzy rules, however, is not co-occurrence but rather graduality or certainty and is determined by the implication operator that defines the rule. In this paper we present a learning algorithm, based on inductive logic programming, that simultaneously learns the semantics and evaluates the validity of fuzzy rules. The learning algorithm selects the implication that maximizes rule confidence while trying to be as informative as possible. The use of inductive logic programming increases the expressive power of fuzzy rules while maintaining their linguistic interpretability.
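The abstract above hinges on the idea that a fuzzy rule's confidence depends on the implication operator chosen as its semantics. As an illustrative sketch (not the paper's ILP algorithm), the snippet below scores a rule "if A(x) then B(y)" under three standard implication operators over made-up membership degrees, and keeps the operator with the highest average confidence; the paper additionally requires the selected implication to remain as informative as possible.

```python
# Hedged sketch: confidence of a fuzzy rule under different implications.
# The data pairs are made-up membership degrees (A(x), B(y)).

def lukasiewicz(a, b):    # Lukasiewicz implication
    return min(1.0, 1.0 - a + b)

def godel(a, b):          # Goedel implication (gradual-rule semantics)
    return 1.0 if a <= b else b

def kleene_dienes(a, b):  # Kleene-Dienes implication (certainty-rule semantics)
    return max(1.0 - a, b)

IMPLICATIONS = {"lukasiewicz": lukasiewicz, "godel": godel,
                "kleene_dienes": kleene_dienes}

def rule_confidence(pairs, imp):
    """Average truth value of the implication over (A(x), B(y)) pairs."""
    return sum(imp(a, b) for a, b in pairs) / len(pairs)

def best_implication(pairs):
    """Pick the implication operator that maximizes rule confidence."""
    scores = {name: rule_confidence(pairs, imp)
              for name, imp in IMPLICATIONS.items()}
    return max(scores, key=scores.get), scores

# Membership degrees of antecedent/consequent on four illustrative examples.
data = [(0.9, 0.8), (0.6, 0.7), (0.3, 0.9), (0.8, 0.6)]
name, scores = best_implication(data)
```

Because Lukasiewicz dominates the other two operators pointwise, it always maximizes raw confidence; this is exactly why the paper trades confidence against informativeness rather than maximizing confidence alone.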


European Conference on Principles of Data Mining and Knowledge Discovery | 2003

Enriching Relational Learning with Fuzzy Predicates

Henri Prade; Gilles Richard; Mathieu Serrurier

The interest of introducing fuzzy predicates when learning rules is twofold. When dealing with numerical data, it enables us to avoid arbitrary discretization. Moreover, it enlarges the expressive power of what is learned by considering different types of fuzzy rules, which may describe gradual behaviors of related attributes or uncertainty pervading conclusions. This paper describes different types of first-order fuzzy rules and a method for learning each type. Finally, we discuss the merits of each type of rule on a benchmark example.
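The point about avoiding arbitrary discretization can be shown in a few lines. In this illustrative sketch (the predicate, threshold, and ramp bounds are made up, not from the paper), a crisp predicate jumps abruptly at a threshold, while its fuzzy counterpart grades membership smoothly across a transition zone:

```python
# Hedged sketch: crisp vs. fuzzy version of a numeric predicate "tall".
# The 170-180 cm ramp is an illustrative choice, not taken from the paper.

def tall_crisp(height_cm):
    """Crisp predicate: an abrupt, arbitrary boundary at 180 cm."""
    return 1.0 if height_cm >= 180 else 0.0

def tall_fuzzy(height_cm, low=170.0, high=180.0):
    """Fuzzy predicate: membership ramps linearly from 0 to 1 on [low, high]."""
    if height_cm <= low:
        return 0.0
    if height_cm >= high:
        return 1.0
    return (height_cm - low) / (high - low)
```

A 179 cm person is "not tall at all" under the crisp cut but tall to degree 0.9 under the fuzzy predicate, which is the kind of boundary artifact the paper avoids.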


Autonomous Agents and Multi-Agent Systems | 2008

Agents that argue and explain classifications

Mathieu Serrurier

Argumentation is a promising approach for autonomous agents reasoning about inconsistent, incomplete, or uncertain knowledge, based on the construction and comparison of arguments. In this paper, we apply this approach to the classification problem, whose purpose is to construct, from a set of training examples, a model that assigns a class to any new example. We propose a formal argumentation-based model that constructs arguments in favor of each possible classification of an example, evaluates them, and determines which of the conflicting arguments are acceptable. Finally, a “valid” classification of the example is suggested. Thus, not only is the class of the example given, but the reasons behind that classification are also provided to the user, in a form that is easy to grasp. We show that such an argumentation-based approach to classification offers further advantages, such as classifying examples even when the set of training examples is inconsistent, and accommodating more general preference relations between hypotheses. In the particular case of concept learning, the results of the version space theory developed by Mitchell are recovered in an elegant way within our argumentation framework. Finally, we show that the model satisfies the rationality postulates identified in the argumentation literature, which ensures that it delivers sound results.
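To make the "class plus reasons" idea concrete, here is a hypothetical miniature classifier (much simpler than the paper's formal model): an argument in favour of class c for example x is a set of attribute conditions that x satisfies, and the argument survives only if no training example of a different class satisfies those conditions. The surviving argument both classifies x and explains the decision.

```python
# Hedged toy sketch of argument-based classification; the attributes,
# examples, and defeat criterion are illustrative, not the paper's model.

def argument_for(x, example, label):
    """Conditions (attribute, value) shared by x and a training example."""
    return frozenset((k, v) for k, v in example.items() if x.get(k) == v), label

def satisfies(example, conds):
    return all(example.get(k) == v for k, v in conds)

def classify(x, training):
    """Return (class, explanation) backed by an argument no counterexample defeats."""
    for ex, label in training:
        conds, _ = argument_for(x, ex, label)
        if not conds:
            continue
        # The argument is defeated if a differently-labelled example satisfies it.
        if all(not satisfies(other, conds)
               for other, lab in training if lab != label):
            return label, sorted(conds)
    return None, []

training = [({"color": "red", "shape": "round"}, "apple"),
            ({"color": "yellow", "shape": "long"}, "banana")]
label, why = classify({"color": "red", "shape": "round"}, training)
```

Here `why` lists the conditions behind the decision, which is the explanatory output the abstract emphasizes.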


Inductive Logic Programming | 2004

A Simulated Annealing Framework for ILP

Mathieu Serrurier; Henri Prade; Gilles Richard

In Inductive Logic Programming (ILP), algorithms that are purely bottom-up or top-down encounter several problems in practice. Since most of them are greedy, they find clauses that are only local optima of the “quality” measure used for evaluating results. Moreover, when clauses are learned one by one, the clauses induced last, which cover only a few remaining examples, tend to be of little interest. In this paper, we propose a simulated annealing framework to overcome these problems. Using a refinement operator, we define neighborhood relations on clauses and on hypotheses (i.e., sets of clauses). With these relations and appropriate quality measures, we show how to induce clauses (in a coverage approach), or to induce hypotheses directly, using simulated annealing algorithms. We discuss the conditions on the refinement operators and the evaluation measures needed to make the algorithm effective. Implementations are described and experimental results are presented.
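The annealing scheme the abstract relies on can be sketched generically: move to a random neighbor, always accept improvements, and accept degradations with probability exp(Δ/T) under a cooling temperature. In a real ILP system the neighborhood would come from a clause refinement operator and the quality from a coverage measure; here both are toy stand-ins.

```python
# Hedged sketch of the simulated-annealing loop; the integer search space
# and quality function below are illustrative placeholders for clauses
# and a coverage-based measure.
import math, random

def anneal(start, neighbors, quality, t0=1.0, cooling=0.95, steps=500, seed=0):
    rng = random.Random(seed)
    current, t = start, t0
    best = current
    for _ in range(steps):
        cand = rng.choice(neighbors(current))
        delta = quality(cand) - quality(current)
        # Always accept improvements; accept degradations with prob e^(delta/t).
        if delta >= 0 or rng.random() < math.exp(delta / t):
            current = cand
        if quality(current) > quality(best):
            best = current
        t *= cooling
    return best

# Toy search space: integer "hypotheses" 0..31, quality peaks at 21.
q = lambda h: -abs(h - 21)
best = anneal(0,
              neighbors=lambda h: [max(0, h - 1), min(31, h + 1)],
              quality=q)
```

Accepting occasional degradations early on is what lets the search escape the local optima that greedy clause construction gets stuck in.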


Artificial Intelligence | 2007

Introducing possibilistic logic in ILP for dealing with exceptions

Mathieu Serrurier; Henri Prade

In this paper we propose a new formalization of the inductive logic programming (ILP) problem for a better handling of exceptions. The problem is now encoded in first-order possibilistic logic. This allows us to handle exceptions by means of prioritized rules, thus taking lessons from non-monotonic reasoning. Indeed, in classical first-order logic, the exceptions of the rules that constitute a hypothesis accumulate, and classifying an example in two different classes, even if one is the right one, is not correct. The possibilistic formalization provides a sound encoding of non-monotonic reasoning that copes with rules with exceptions and prevents an example from being classified in more than one class. The benefits of our approach with respect to the use of first-order decision lists are pointed out. The possibilistic logic view of the ILP problem leads to an optimization problem at the algorithmic level. We propose an algorithm based on simulated annealing that computes, in a single run, the set of rules together with their priority levels. The reported experiments show that the algorithm is competitive with standard ILP approaches on benchmark examples.
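The mechanism by which prioritized rules tame exceptions can be illustrated in a few lines (a deliberately simplified sketch, not the paper's possibilistic semantics): a more specific, higher-priority rule overrides a general default, so every example ends up in exactly one class.

```python
# Hedged sketch: classification with prioritized rules in the spirit of
# possibilistic logic.  The bird/penguin example and priorities are the
# classic non-monotonic illustration, not taken from the paper.

def classify(x, rules):
    """rules: list of (priority, condition, label); highest priority wins."""
    fired = [(prio, label) for prio, cond, label in rules if cond(x)]
    if not fired:
        return None
    return max(fired)[1]

rules = [
    (1, lambda a: a["bird"], "flies"),                   # general default
    (2, lambda a: a["bird"] and a["penguin"], "walks"),  # exception, higher priority
]

tweety = {"bird": True, "penguin": False}
opus = {"bird": True, "penguin": True}
label_tweety = classify(tweety, rules)
label_opus = classify(opus, rules)
```

In flat first-order logic both rules would fire on the penguin and yield two classes; the priority level resolves the conflict in favour of the exception.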


International Journal of Approximate Reasoning | 2013

An informational distance for estimating the faithfulness of a possibility distribution, viewed as a family of probability distributions, with respect to data

Mathieu Serrurier; Henri Prade

An acknowledged interpretation of possibility distributions in quantitative possibility theory is in terms of families of probabilities that are upper and lower bounded by the associated possibility and necessity measures. This paper proposes an informational distance function for possibility distributions that agrees with the above-mentioned view of possibility theory in both the continuous and the discrete cases. In particular, we show that, given a set of data following a probability distribution, the optimal possibility distribution with respect to our informational distance is the one obtained by the probability-possibility transformation that agrees with the maximal specificity principle. It is also shown that when the optimal distribution is not available due to representation bias, maximizing this possibilistic informational distance provides more faithful results than approximating the probability distribution and then applying the probability-possibility transformation. We show that maximizing the possibilistic informational distance is equivalent to minimizing the squared distance to the unknown optimal possibility distribution. Two advantages of the proposed informational distance function are that (i) it does not require knowledge of the shape of the probability distribution underlying the data, and (ii) it amounts to summing up the elementary terms corresponding to the informational distance between the considered possibility distribution and each piece of data. We detail the particular case of triangular and trapezoidal possibility distributions and show that any unimodal unknown probability distribution can be faithfully upper-approximated by a triangular distribution obtained by optimizing the possibilistic informational distance.
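The probability-possibility transformation invoked above has a well-known discrete form: with outcomes sorted by decreasing probability, the possibility of an outcome is the sum of the probabilities of all outcomes no more probable than it. The sketch below implements this (tie handling is simplified; the example distribution is made up):

```python
# Hedged sketch of the discrete probability-to-possibility transformation
# obeying the maximal-specificity principle; ties are handled naively.

def prob_to_poss(p):
    """p: dict value -> probability.  Returns the most specific possibility
    distribution whose associated probability family contains p."""
    order = sorted(p, key=p.get, reverse=True)
    poss, tail = {}, sum(p.values())  # tail starts at 1 for a proper distribution
    for v in order:
        poss[v] = tail
        tail -= p[v]
    return poss

pi = prob_to_poss({"a": 0.5, "b": 0.3, "c": 0.2})
```

Note that the most probable outcome always receives possibility 1, so the result is a normalized possibility distribution that upper-bounds the input probabilities.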


Soft Computing | 2006

Improving Expressivity of Inductive Logic Programming by Learning Different Kinds of Fuzzy Rules

Mathieu Serrurier; Henri Prade

Introducing fuzzy predicates in inductive logic programming may serve two different purposes: allowing for more adaptability when learning classical rules, or gaining more expressivity by learning fuzzy rules. The latter concern is the topic of this paper. Indeed, introducing fuzzy predicates in the antecedent and in the consequent of rules may convey different non-classical meanings. The paper focuses on the learning of gradual and certainty rules, which have an increased expressive power and no simple crisp counterpart. The benefits and the application domain of each kind of rule are discussed. Appropriate confidence degrees for each type of rule are introduced. These confidence degrees play a major role in adapting the classical FOIL inductive logic programming algorithm to the induction of fuzzy rules, by guiding the learning process. The method is illustrated on a benchmark example and a case-study database.


International Symposium on Methodologies for Intelligent Systems | 2009

Elicitation of Sugeno Integrals: A Version Space Learning Perspective

Henri Prade; Agnès Rico; Mathieu Serrurier

Sugeno integrals can be viewed as multiple criteria aggregation functions which take into account a form of synergy between criteria. As such, Sugeno integrals constitute an important family of tools for modeling qualitative preferences defined on ordinal scales. The elicitation of Sugeno integrals starts from a set of data that associates a global evaluation assessment to situations described by multiple criteria values. A consistent set of data corresponds to a non-empty family of Sugeno integrals with which the data are compatible. This elicitation process presents some similarity with the revision process underlying the version space approach in concept learning, when new data are introduced. More precisely, the elicitation corresponds to a graded extension of version space learning, recently proposed in the framework of bipolar possibility theory. This paper establishes the relation between these two formal settings.
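A minimal Sugeno integral helps see what is being elicited: with criterion scores sorted increasingly, the integral is the maximum over criteria of the minimum between a score and the capacity (importance) of the set of criteria scoring at least as high. The capacity values below are illustrative, not from the paper:

```python
# Hedged sketch of the Sugeno integral; the two-criterion capacity mu and
# scores are made-up numbers on a common [0, 1] scale.

def sugeno(scores, mu):
    """scores: dict criterion -> value in [0, 1]; mu: dict frozenset -> value."""
    crits = sorted(scores, key=scores.get)   # criteria by increasing score
    out = 0.0
    for i, c in enumerate(crits):
        upper = frozenset(crits[i:])         # criteria scoring >= scores[c]
        out = max(out, min(scores[c], mu[upper]))
    return out

scores = {"math": 0.8, "lang": 0.4}
mu = {frozenset({"math", "lang"}): 1.0,
      frozenset({"math"}): 0.7,
      frozenset({"lang"}): 0.5}
value = sugeno(scores, mu)
```

Elicitation, as the abstract describes it, works in the other direction: given (scores, global evaluation) pairs, one narrows down the family of capacities `mu` compatible with the data, much as version space learning narrows down consistent hypotheses.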


International Conference on Information Processing | 2012

Possibilistic KNN Regression Using Tolerance Intervals

Mohammad Ghasemi Hamed; Mathieu Serrurier; Nicolas Durand

Regression methods that minimize predictive risk usually look for precise values that tend toward the true response value. In some situations, however, it may be more reasonable to predict intervals rather than precise values. In this paper, we focus on finding such intervals for the K-nearest neighbors (KNN) method with precise values for inputs and output. In KNN, prediction intervals are usually built by considering the local probability distribution of the neighborhood. In situations where we do not have enough data in the neighborhood to obtain statistically significant distributions, we would rather build intervals that take such distribution uncertainty into account. To this end, we suggest using tolerance intervals to build the maximally specific possibility distribution that bounds each population quantile of the true distribution (with a fixed confidence level) that might have generated the sample set. We then propose a new interval regression method based on KNN which takes advantage of this possibility distribution in order to choose, for each instance, the value of K that offers a good trade-off between precision and the uncertainty due to the limited sample size. Finally, we apply our method to an aircraft trajectory prediction problem.
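A stripped-down version of interval prediction with KNN can be sketched as follows. Note the simplification: the paper builds tolerance intervals, which widen for small K to account for estimation uncertainty; this sketch uses a plain normal interval over the neighbors' outputs, and the dataset is made up.

```python
# Hedged sketch: interval prediction from the K nearest neighbours' outputs.
# A plain normal interval stands in for the paper's tolerance intervals.
from statistics import NormalDist, mean, stdev

def knn_interval(x, data, k=3, coverage=0.9):
    """data: list of (input, output) with scalar inputs; returns (lo, hi)."""
    neighbors = sorted(data, key=lambda d: abs(d[0] - x))[:k]
    ys = [y for _, y in neighbors]
    z = NormalDist().inv_cdf(0.5 + coverage / 2)  # two-sided normal quantile
    m, s = mean(ys), stdev(ys)
    return m - z * s, m + z * s

data = [(1, 10.0), (2, 11.0), (3, 12.0), (10, 30.0), (11, 31.0)]
lo, hi = knn_interval(2.5, data, k=3)
```

A proper tolerance interval would replace `z * s` by a factor involving a chi-square quantile in the degrees of freedom `k - 1`, which is what makes the interval grow when the neighborhood is small.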


IEEE International Conference on Fuzzy Systems | 2007

A general framework for imprecise regression

Mathieu Serrurier; Henri Prade

Many studies in machine learning, and more specifically in regression, focus on the search for a precise model when precise data are available. It is well known, however, that the model thus found may not exactly describe the target concept, due to the existence of learning bias. In order to overcome the problem of illusorily precise models, this paper provides a general framework for imprecise regression from non-fuzzy input and output data. The goal of imprecise regression is to find a model that achieves the best tradeoff between faithfulness with respect to the data and (meaningful) precision. We propose an algorithm based on simulated annealing for linear and non-linear imprecise regression with triangular and trapezoidal fuzzy sets. This approach is compared with other fuzzy regression frameworks, especially possibilistic regression. Experiments on an environmental database show promising results.
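The faithfulness/precision tradeoff at the heart of imprecise regression can be made concrete with a deliberately simplified objective: score an interval-valued model by how many data points its band covers, minus a penalty on the band's width. Crisp intervals stand in for the paper's triangular/trapezoidal fuzzy sets, a grid search stands in for simulated annealing, and the penalty weight `lambda_` is illustrative.

```python
# Hedged sketch of the imprecise-regression objective: the band y = a*x +/- w
# is scored by coverage minus a width penalty.  Data and lambda_ are made up.

def score(a, w, data, lambda_=0.05):
    """Coverage of the band y = a*x +/- w, penalized by its width."""
    covered = sum(1 for x, y in data if abs(y - a * x) <= w)
    return covered / len(data) - lambda_ * w

def fit(data, slopes, widths):
    """Grid search for the (slope, width) pair maximizing the tradeoff score."""
    return max(((a, w) for a in slopes for w in widths),
               key=lambda m: score(m[0], m[1], data))

data = [(1, 2.1), (2, 3.9), (3, 6.2), (4, 7.8)]
a, w = fit(data, slopes=[1.5, 2.0, 2.5], widths=[0.1, 0.3, 1.0])
```

A band that is too narrow loses coverage and a band that is too wide pays the width penalty, so the maximizer lands on the tightest band that still tracks the data.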

Collaboration


Dive into Mathieu Serrurier's collaborations.

Top Co-Authors

Henri Prade
University of Toulouse

Nicolas Durand
École nationale de l'aviation civile

Mohammad Ghasemi Hamed
Centre national de la recherche scientifique

David Gianazza
École nationale de l'aviation civile

Didier Dubois
Paul Sabatier University

Nicolas Hug
Paul Sabatier University