

Publications


Featured research published by Edith Grall-Maës.


IEEE Transactions on Signal Processing | 2002

Mutual information-based feature extraction on the time-frequency plane

Edith Grall-Maës; Pierre Beauseroy

A method is proposed for automatic extraction of effective features for class separability. It applies to nonstationary processes described only by sample sets of stochastic signals. The extraction is based on time-frequency representations (TFRs) that are potentially suited to the characterization of nonstationarities. The features are defined by parameterized mappings applied to a TFR. These mappings select a region of the time-frequency plane by using a two-dimensional (2-D) parameterized weighting function and provide a standard characteristic in the restricted representation obtained. The features are automatically drawn from the TFR by tuning the weighting function parameters. The extraction is driven to maximize the information brought by the features about the class membership. It uses a mutual information criterion, based on estimated probability distributions. The framework is developed for the extraction of a single feature and extended to several features. A classification scheme adapted to the extracted features is proposed. Finally, some experimental results are given to demonstrate the efficacy of the method.
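
As a rough illustration of the mutual information criterion only (not the paper's time-frequency machinery), the sketch below estimates the mutual information between a scalar feature and class labels with a plug-in histogram estimator; a feature that separates the classes carries more information about class membership than one that does not. The function name, binning choices, and data are our own assumptions.

```python
import numpy as np

def mutual_information(feature, labels, n_bins=10):
    """Plug-in estimate of I(X; Y) between a scalar feature X and class
    labels Y, using histogram probability estimates (in nats)."""
    feature = np.asarray(feature)
    labels = np.asarray(labels)
    edges = np.histogram_bin_edges(feature, bins=n_bins)
    n = len(feature)
    mi = 0.0
    for c in np.unique(labels):
        mask = labels == c
        p_c = mask.mean()                                # P(class c)
        joint, _ = np.histogram(feature[mask], bins=edges)
        marg, _ = np.histogram(feature, bins=edges)
        p_xc = joint / n                                 # P(bin, class c)
        p_x = marg / n                                   # P(bin)
        nz = p_xc > 0
        mi += np.sum(p_xc[nz] * np.log(p_xc[nz] / (p_x[nz] * p_c)))
    return mi

rng = np.random.default_rng(0)
y = np.array([0] * 500 + [1] * 500)
# A well-separated feature versus a nearly uninformative one
x_separated = np.concatenate([rng.normal(-3, 1, 500), rng.normal(3, 1, 500)])
x_overlap = np.concatenate([rng.normal(0, 1, 500), rng.normal(0.1, 1, 500)])
mi_sep = mutual_information(x_separated, y)
mi_ovl = mutual_information(x_overlap, y)
```

Maximizing such a criterion over the parameters of the weighting function is what drives the extraction described above.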


Neurocomputing | 2014

Multi-task learning with one-class SVM

Xiyan He; Gilles Mourot; Didier Maquin; José Ragot; Pierre Beauseroy; André Smolarz; Edith Grall-Maës

Multi-task learning has been developed as an effective way to improve generalization performance by training multiple related tasks simultaneously. The determination of the relatedness between tasks is usually the key to the formulation of a multi-task learning method. In this paper, we make the assumption that when tasks are related to each other, their models or their model parameters are close to a certain mean function. Following this task-relatedness assumption, two multi-task learning formulations based on one-class support vector machines (one-class SVM) are presented. With the help of a new kernel design, both multi-task learning methods can be solved by the optimization program of a single one-class SVM. Experiments conducted on both a low-dimensional nonlinear toy dataset and high-dimensional textured images show that our approaches lead to very encouraging results.
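
The task-relatedness assumption (task models close to a common mean function) can be illustrated in a much simpler setting than the paper's one-class SVM formulation: shrink per-task estimates toward a global mean. The function, the coupling parameter `lam`, and the data are hypothetical and ours, not the paper's.

```python
import numpy as np

# Each task model (here just a mean vector) is pulled toward a shared mean.
# lam controls the coupling: 0 keeps independent single-task estimates,
# 1 collapses every task onto the shared mean function.
def multitask_shrinkage(task_samples, lam=0.5):
    task_means = np.array([s.mean(axis=0) for s in task_samples])
    global_mean = task_means.mean(axis=0)
    return (1 - lam) * task_means + lam * global_mean, global_mean

rng = np.random.default_rng(1)
true_center = np.array([1.0, -1.0])
# Five related tasks: each task center is a small perturbation of a common one
tasks = [rng.normal(true_center + rng.normal(0.0, 0.2, 2), 1.0, size=(10, 2))
         for _ in range(5)]
shrunk, shared = multitask_shrinkage(tasks, lam=0.5)
indep = np.array([s.mean(axis=0) for s in tasks])
```

When tasks really are related, this kind of coupling typically trades a little bias for a large variance reduction, which is the intuition behind the kernel design mentioned above.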


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2009

Optimal Decision Rule with Class-Selective Rejection and Performance Constraints

Edith Grall-Maës; Pierre Beauseroy

The problem of defining a decision rule which takes into account performance constraints and class-selective rejection is formalized in a general framework. In the proposed formulation, the problem is defined using three kinds of criteria. The first is the cost to be minimized, which defines the objective function; the second is the set of decision options, determined by the admissible assignment classes or subsets of classes; and the third is the set of performance constraints. The optimal decision rule within the statistical decision theory framework is obtained by solving the stated optimization problem. Two examples are provided to illustrate the formulation and the decision rule obtained.
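
A minimal sketch of the class-selective rejection idea, with made-up costs (not the paper's constrained formulation): decision options are subsets of classes, the objective is the expected cost under the class posterior, and the rule picks the cheapest option.

```python
import numpy as np
from itertools import combinations

# Illustrative costs: choosing subset S costs c_size * |S| when the true
# class is in S (ambiguity penalty) and c_err when it is not. Taking the
# expectation under the posterior and minimizing over subsets gives the rule.
def optimal_decision(posterior, c_size=0.1, c_err=1.0):
    k = len(posterior)
    best, best_cost = None, np.inf
    for r in range(1, k + 1):
        for subset in combinations(range(k), r):
            p_in = sum(posterior[i] for i in subset)
            cost = c_size * r * p_in + c_err * (1 - p_in)
            if cost < best_cost:
                best, best_cost = subset, cost
    return best, best_cost

# A confident posterior yields a single class; an ambiguous one yields a
# subset, i.e. selective rejection between the plausible classes.
d1, _ = optimal_decision(np.array([0.90, 0.05, 0.05]))
d2, _ = optimal_decision(np.array([0.48, 0.47, 0.05]))
```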


International Conference on Acoustics, Speech, and Signal Processing | 2006

Multilabel Classification Rule with Performance Constraints

Edith Grall-Maës; Pierre Beauseroy; Abdenour Bounsiar

A formulation for classification problems with multilabel decisions and performance constraints is presented within the framework of statistical decision theory. The definition of the problem takes into account three concerns. The first is the cost function, which defines the criterion to minimize; the second is the decision options, which are defined by the admissible assignment classes or subsets of classes; and the third is the performance constraints. Assuming that the conditional probability density functions are known, the classification rule that solves the stated problem is expounded. Two examples are provided to illustrate the formulation and the decision rule obtained.


Pattern Recognition Letters | 2008

General solution and learning method for binary classification with performance constraints

Abdenour Bounsiar; Pierre Beauseroy; Edith Grall-Maës

In this paper, the problem of binary classification is studied with one or two performance constraints. When the constraints cannot be satisfied, the initial problem has no solution and an alternative problem is solved by introducing a rejection option. The optimal solution for such problems in the framework of statistical hypothesis testing is shown to be based on the likelihood ratio with one or two thresholds, depending on whether it is necessary to introduce a rejection option or not. These problems are then addressed when classes are only defined by labelled samples. To illustrate the resolution of cases with and without a rejection option, the Neyman-Pearson problem and that of minimizing the reject probability subject to a constraint on the error probability are studied. Solutions based on SVMs and on a kernel-based classifier are experimentally compared and discussed.
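
The likelihood-ratio structure with one or two thresholds can be sketched for two known one-dimensional Gaussian classes; the thresholds below are arbitrary illustrations, not tuned to any performance constraint.

```python
import numpy as np

def gaussian_pdf(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

# One threshold gives the classical Neyman-Pearson-style test; two
# thresholds open a reject region between them where no class is assigned.
def lr_decision(x, t_low=0.5, t_high=2.0):
    lr = gaussian_pdf(x, 2.0, 1.0) / gaussian_pdf(x, 0.0, 1.0)
    if lr < t_low:
        return 0    # decide class 0
    if lr > t_high:
        return 1    # decide class 1
    return -1       # reject: likelihood ratio is inconclusive

# For these Gaussians the ratio is exp(2x - 2): small at x=-1, ~1 at x=1,
# large at x=3, so the three points fall in the three regions.
decisions = [lr_decision(x) for x in (-1.0, 1.0, 3.0)]
```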


BioMed Research International | 2009

Gene-based multiclass cancer diagnosis with class-selective rejections

Nisrine Jrad; Edith Grall-Maës; Pierre Beauseroy

Supervised learning of microarray data has received much attention in recent years. Multiclass cancer diagnosis, based on selected gene profiles, is used as an adjunct to clinical diagnosis. However, supervised diagnosis may hinder patient care, add expense, or confound a result. To avoid such misleading outcomes, a multiclass cancer diagnosis with class-selective rejection is proposed. It rejects some patients from one, some, or all classes in order to ensure a higher reliability while reducing time and expense costs. Moreover, this classifier takes into account asymmetric penalties dependent on each class and on each wrong or partially correct decision. It is based on ν-1-SVM coupled with its regularization path and minimizes a general loss function defined in the class-selective rejection scheme. The state-of-the-art multiclass algorithms can be considered as a particular case of the proposed algorithm, where the number of decisions is given by the classes and the loss function is defined by the Bayesian risk. Two experiments are carried out in the Bayesian and the class-selective rejection frameworks. Five datasets of selected genes are used to assess the performance of the proposed method. Results are discussed and accuracies are compared with those computed by the Naive Bayes, Nearest Neighbor, Linear Perceptron, Multilayer Perceptron, and Support Vector Machines classifiers.


International Conference on Pattern Recognition | 2008

Supervised learning rule selection for multiclass decision with performance constraints

Nisrine Jrad; Edith Grall-Maës; Pierre Beauseroy

A procedure to select a supervised rule for a multiclass problem from a labeled dataset is proposed. The rule allows class-selective rejection and performance constraints. The unknown probabilities are estimated with a Parzen estimator. A set of rules is built by varying the Parzen smoothness parameter of the marginal probability estimates and plugging them into the statistical hypothesis rules. A criterion that assesses the quality of these rules is estimated and used to select a rule. Resampling and aggregation methods are used to show the efficiency of the estimated criterion.
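
A minimal Parzen (kernel density) estimator, the building block whose smoothness parameter is varied in the procedure above; the Gaussian kernel, the bandwidth value, and the data here are illustrative choices of ours.

```python
import numpy as np

# Parzen window estimate of a density at point x: average of Gaussian
# kernels centered on the samples. The bandwidth h is the smoothness
# parameter; varying it yields the family of candidate rules.
def parzen_density(x, samples, h):
    samples = np.asarray(samples)
    kernels = np.exp(-0.5 * ((x - samples) / h) ** 2) / (h * np.sqrt(2 * np.pi))
    return kernels.mean()

rng = np.random.default_rng(2)
data = rng.normal(0.0, 1.0, 1000)
# With many samples and a moderate h, the estimate at the mode is close
# to the true N(0, 1) density value 1/sqrt(2*pi) ~ 0.399.
est = parzen_density(0.0, data, h=0.3)
```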


International Conference on Machine Learning and Applications | 2008

A Supervised Decision Rule for Multiclass Problems Minimizing a Loss Function

Nisrine Jrad; Edith Grall-Maës; Pierre Beauseroy

A multiclass learning method which minimizes a loss function is proposed. The loss function is defined by costs associated to the decision options, which may include classes, subsets of classes if partial rejection is considered, and all classes if total rejection is introduced. A formulation of the general problem is given, a decision rule based on the ν-1-SVMs trained on each class is defined, and a learning method is proposed. The latter optimizes all the ν-1-SVM parameters and all the decision rule parameters jointly in order to minimize the loss function. To extend the search space of the ν-1-SVM parameters and keep the processing time under control, the ν-1-SVM regularization path is derived for each class and used during the learning process. Experimental results on artificial data sets and some benchmark data sets are provided to assess the effectiveness of the approach.


International Conference on Pattern Recognition Applications and Methods | 2016

Assessing the Number of Clusters in a Mixture Model with Side-information

Edith Grall-Maës; Duc Tung Dao

This paper deals with the selection of the cluster number in a clustering problem, taking into account the side-information that the points of a chunklet arise from the same cluster. An Expectation-Maximization algorithm is used to estimate the parameters of a mixture model and determine the data partition. To select the number of clusters, the usual criteria are not suitable because they do not consider the side-information in the data. We therefore propose suitable criteria, which are modified versions of three usual criteria: the Bayesian information criterion (BIC), the Akaike information criterion (AIC), and the normalized entropy criterion (NEC). The proposed criteria are used to select the number of clusters in the case of two simulated problems and one real problem. Their performances are compared and the influence of the chunklet size is discussed.
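
For context, the sketch below computes the ordinary BIC (without side-information) for a 1-D Gaussian mixture fitted by EM; the paper's contribution is precisely to modify such criteria for chunklet constraints, which this sketch omits. All implementation details are our own simplifications.

```python
import numpy as np

def em_gmm_1d(x, k, n_iter=200, seed=0):
    """Fit a 1-D Gaussian mixture with EM and return its BIC (lower is better)."""
    rng = np.random.default_rng(seed)
    n = len(x)
    mu = rng.choice(x, size=k, replace=False)   # initialize means on data points
    sigma = np.full(k, x.std())
    pi = np.full(k, 1.0 / k)

    def densities():
        return pi * np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2) \
            / (sigma * np.sqrt(2 * np.pi))

    for _ in range(n_iter):
        dens = densities()
        resp = dens / dens.sum(axis=1, keepdims=True)          # E-step
        nk = resp.sum(axis=0)                                  # M-step
        mu = (resp * x[:, None]).sum(axis=0) / nk
        sigma = np.maximum(
            np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk), 0.1)
        pi = nk / n

    loglik = np.log(densities().sum(axis=1)).sum()
    n_params = 3 * k - 1        # k means, k stds, k-1 free mixing weights
    return -2 * loglik + n_params * np.log(n)

rng = np.random.default_rng(3)
# Two clear clusters: BIC should penalize the single-component model
x = np.concatenate([rng.normal(-4, 1, 300), rng.normal(4, 1, 300)])
bic = {k: em_gmm_1d(x, k) for k in (1, 2, 3)}
best_k = min(bic, key=bic.get)
```

The modified criteria in the paper replace the likelihood term with one that respects the chunklet constraints during the E-step.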


International Conference on Machine Learning and Applications | 2015

Linear KernelPCA and K-Means Clustering Using New Estimated Eigenvectors of the Sample Covariance Matrix

Nassara Elhadji Ille Gado; Edith Grall-Maës; Malika Kharouf

In this article, random matrix theory is used to propose a new K-means clustering algorithm via linear PCA. Our approach is devoted to linear PCA estimation when the number of features d and the number of samples n go to infinity at the same rate. More precisely, we deal with the problem of building a consistent estimator of the eigenvectors of the covariance data matrix. Numerical results, based on the normalized mutual information (NMI) and the final error rate (ER), are provided and support our algorithm, even for a small number of features/samples. We also compare our approach to spectral clustering, K-means, and traditional PCA methods.
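
The baseline pipeline that this work improves on, naive sample-covariance eigenvectors followed by K-means, can be sketched as follows; the random-matrix-theory correction to the eigenvector estimates is the paper's contribution and is omitted here. Names and data are illustrative.

```python
import numpy as np

# Project onto the leading eigenvectors of the sample covariance matrix
def pca_project(X, n_components):
    Xc = X - X.mean(axis=0)
    cov = Xc.T @ Xc / (len(X) - 1)
    _, eigvec = np.linalg.eigh(cov)               # eigenvalues ascending
    top = eigvec[:, ::-1][:, :n_components]       # leading eigenvectors
    return Xc @ top

# Plain Lloyd's K-means on the projected data
def kmeans(X, k, n_iter=50, seed=0):
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(n_iter):
        labels = np.argmin(((X[:, None] - centers) ** 2).sum(axis=2), axis=1)
        centers = np.array([X[labels == j].mean(axis=0)
                            if np.any(labels == j) else centers[j]
                            for j in range(k)])
    return labels

rng = np.random.default_rng(4)
# Two clusters in 50 dimensions, separated along the first three coordinates
A = rng.normal(0, 1, (100, 50)); A[:, :3] += 4
B = rng.normal(0, 1, (100, 50))
labels = kmeans(pca_project(np.vstack([A, B]), 2), 2)
```

In the high-dimensional regime (d and n growing together) the sample eigenvectors used here become inconsistent, which motivates the corrected estimator of the article.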

Collaboration


Dive into Edith Grall-Maës's collaborations.

Top Co-Authors

Pierre Beauseroy
Centre national de la recherche scientifique

Nisrine Jrad
University of Technology of Troyes

Abdenour Bounsiar
Centre national de la recherche scientifique

Antoine Grall
Centre national de la recherche scientifique

Duc Tung Dao
Centre national de la recherche scientifique

Hichem Snoussi
Centre national de la recherche scientifique

Nicolas Chrysanthos
Centre national de la recherche scientifique