Younès Bennani
University of Paris
Publications
Featured research published by Younès Bennani.
International Conference on Acoustics, Speech, and Signal Processing | 1991
Younès Bennani; Patrick Gallinari
The authors propose a novel model for text-independent talker identification that uses feature information extracted by a TDNN (time-delay neural network). The model has been tested on 20 speakers (10 male and 10 female) from the TIMIT database using an LPC (linear predictive coding) parameterization. An average identification rate of 98% was observed.
Annual Mediterranean Ad Hoc Networking Workshop | 2005
Lahcène Dehni; Francine Krief; Younès Bennani
The use of wireless sensor networks (WSNs) is increasing in many fields. However, sensor size is an important limitation in terms of energy autonomy, and thus of lifetime, because the battery must be very small. This is why current research focuses mainly on energy management in WSNs, taking communications into account above all. In this context, we compare different clustering methods used in WSNs, particularly EECS, with an adaptive routing algorithm we have named LEA2C, which is based on topological self-organizing maps. We obtain important gains in terms of energy and thus of network lifetime.
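The energy argument above can be illustrated with a toy first-order radio model (the standard LEACH-style cost model; all constants, distances, and names below are illustrative assumptions, not values from the paper): short intra-cluster hops plus one long hop from the cluster head cost less than every node reaching the sink directly.

```python
# Toy first-order radio model: transmitting k bits over distance d costs
# k * (E_ELEC + EPS_AMP * d**2). Constants below are illustrative only.
E_ELEC = 50e-9      # electronics cost per bit (J)
EPS_AMP = 100e-12   # amplifier cost per bit per m^2 (J)

def tx_cost(bits, dist):
    """Energy to transmit `bits` over `dist` metres under the toy model."""
    return bits * (E_ELEC + EPS_AMP * dist ** 2)

# Ten nodes 10 m from a cluster head, which is 100 m from the sink,
# versus every node transmitting 100 m directly to the sink.
bits = 2000
direct = 10 * tx_cost(bits, 100.0)
clustered = 10 * tx_cost(bits, 10.0) + tx_cost(bits, 100.0)
print(clustered < direct)  # True: short intra-cluster hops dominate the saving
```

The quadratic distance term is what makes clustering pay off: halving the typical transmission distance cuts the amplifier cost by roughly four.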
Speech Communication | 1995
Younès Bennani; Patrick Gallinari
This article reviews current research on neural network systems for speaker recognition tasks. We consider two main approaches: the first relies on direct classification, the second on speaker modeling. The potential of connectionist models for speaker recognition is first presented and the main models are briefly introduced. We then present different systems that have recently been proposed for speaker recognition tasks. We discuss their respective performance and potential, and compare these techniques to more conventional methods such as vector quantization and hidden Markov models. The paper ends with a summary and suggestions for further developments.
International Symposium on Neural Networks | 2009
Nistor Grozavu; Younès Bennani; Mustapha Lebbah
We introduce a new learning approach that simultaneously provides a Self-Organizing Map (SOM) and a local weight vector for each cluster. The proposed approach is computationally simple and learns a different feature-weight vector (relevance vector) for each cell. Based on the Self-Organizing Map approach, we present two new simultaneous clustering and weighting algorithms: local weighting of observations (lwo-SOM) and local weighting of distances (lwd-SOM). Both algorithms achieve the same goal by minimizing different cost functions. After the learning phase, a selection method based on the weight vectors is used to prune irrelevant variables and thus characterize the clusters. We illustrate the performance of the proposed approach on different data sets. Experiments on a number of synthetic and real data sets show the benefits of the proposed local weighting using self-organizing models.
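The core idea of pairing each cell with a relevance vector can be sketched as a weighted best-matching-unit search. This is a rough illustration, not the authors' exact lwo-SOM or lwd-SOM cost functions, and all names are illustrative:

```python
import numpy as np

def weighted_distance(x, prototype, weights):
    """Per-cell weighted squared Euclidean distance: each cell owns a
    relevance vector that stretches or shrinks individual features."""
    return float(np.sum(weights * (x - prototype) ** 2))

def best_matching_unit(x, prototypes, weight_vectors):
    """Pick the cell whose weighted distance to x is smallest."""
    dists = [weighted_distance(x, p, w) for p, w in zip(prototypes, weight_vectors)]
    return int(np.argmin(dists))

# Two cells; the second down-weights its noisy second feature.
prototypes = np.array([[0.0, 0.0], [1.0, 5.0]])
weights = np.array([[1.0, 1.0], [1.0, 0.01]])
x = np.array([1.0, 0.0])
print(best_matching_unit(x, prototypes, weights))  # cell 1 wins despite the offset
```

A weight near zero effectively removes a feature from that cell's distance, which is the mechanism behind pruning irrelevant variables per cluster.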
International Symposium on Neural Networks | 2007
Mustapha Lebbah; Nicoleta Rogovschi; Younès Bennani
This paper introduces a probabilistic self-organizing map for clustering, analysis and visualization of multivariate binary data. We propose a probabilistic formalism dedicated to binary data in which cells are represented by a Bernoulli distribution. Each cell is characterized by a prototype with the same binary coding as used in the data space, together with the probability of differing from this prototype. The learning algorithm we propose, BeSOM, is an application of the standard EM algorithm. We illustrate the power of this method with two data sets taken from a public repository: a handwritten digit data set and a zoo data set. The results show good topological ordering and homogeneous clustering.
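The Bernoulli cell model described above can be sketched as follows: a cell holds a binary prototype w and a probability eps of differing from it, so a binary input x scores eps on every mismatched bit and (1 - eps) on every matched one. A minimal sketch with illustrative names, not the BeSOM implementation:

```python
import numpy as np

def bernoulli_log_likelihood(x, prototype, eps):
    """Log-likelihood of a binary vector x under a cell with binary
    prototype w and mismatch probability eps:
    P(x) = prod_j eps^[x_j != w_j] * (1 - eps)^[x_j == w_j]."""
    mismatches = int(np.sum(x != prototype))
    matches = len(x) - mismatches
    return mismatches * np.log(eps) + matches * np.log(1.0 - eps)

x = np.array([1, 0, 1, 1])
w = np.array([1, 0, 0, 1])  # one bit differs from the prototype
print(bernoulli_log_likelihood(x, w, 0.1))
```

Assigning an input to the cell with the highest log-likelihood plays the role of the best-matching-unit step in a conventional SOM, while keeping a fully binary prototype that is directly interpretable in the data space.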
International Conference on Acoustics, Speech, and Signal Processing | 1990
Younès Bennani; Françoise Fogelman Soulie; Patrick Gallinari
A connectionist approach to automatic speaker identification based on the learning vector quantization (VQ) algorithm is presented. For each speaker enrolled in the identification system, a fixed number of references is kept. The algorithm is based on a nearest-neighbor principle, with adaptation through learning. Identification is performed by comparing the distance of the unknown utterance to the nearest reference against a given threshold. Preliminary tests run on a ten-speaker set show an identification rate of 97% for MFC coefficients. The identification system and database used, and the results obtained for different combinations of parameters, are given. The system is evaluated by comparing its performance with a Bayesian system.
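The two ingredients described, an LVQ-style reference update and a threshold-based nearest-reference decision, can be sketched as follows (illustrative names and values, not the original system):

```python
import numpy as np

def lvq1_update(reference, x, same_class, lr=0.1):
    """One LVQ1 step: move the nearest reference toward x if it carries
    the right speaker label, away from x otherwise."""
    sign = 1.0 if same_class else -1.0
    return reference + sign * lr * (x - reference)

def identify(x, references, labels, threshold):
    """Nearest-reference decision: accept the closest speaker only if the
    distance falls under the threshold, else reject as unknown."""
    dists = np.linalg.norm(references - x, axis=1)
    i = int(np.argmin(dists))
    return labels[i] if dists[i] <= threshold else None

refs = np.array([[0.0, 0.0], [10.0, 10.0]])
labels = ["speaker_A", "speaker_B"]
print(identify(np.array([0.5, 0.2]), refs, labels, threshold=2.0))  # speaker_A
print(identify(np.array([5.0, 5.0]), refs, labels, threshold=2.0))  # None (unknown)
```

The threshold is what turns a closed-set nearest-neighbor classifier into one that can also reject utterances far from every enrolled speaker's references.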
International Symposium on Neural Networks | 2008
Guénaël Cabanes; Younès Bennani
Determining the optimum number of clusters is an ill-posed problem for which there is no simple way of knowing that number without a priori knowledge. The purpose of this paper is to provide a simultaneous two-level clustering algorithm based on self-organizing maps, called DS2L-SOM, which learns at the same time the structure of the data and its segmentation. The algorithm is based on both distance and density measures in order to accomplish a topographic clustering. An important feature of the algorithm is that the cluster number is discovered automatically. A great advantage of the proposed algorithm, compared to common partitional clustering methods, is that it is not restricted to convex clusters but can recognize arbitrarily shaped clusters and touching clusters. The validity and the stability of this algorithm are superior to those of standard two-level clustering methods such as SOM+K-means and SOM+hierarchical agglomerative clustering. This is demonstrated on a set of critical clustering problems.
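The second clustering level can be sketched as linking neighbouring map units whose prototypes are close and reading clusters off as connected components, which is how the cluster count can emerge automatically. This is a simplified distance-only sketch with illustrative names; the paper's algorithm also uses a density measure:

```python
import numpy as np

def cluster_units(prototypes, grid_neighbors, dist_threshold):
    """Second-level clustering sketch: union-find over SOM grid edges,
    merging neighbouring units whose prototypes are closer than the
    threshold; clusters are the resulting connected components."""
    n = len(prototypes)
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path compression
            i = parent[i]
        return i

    for i, j in grid_neighbors:
        if np.linalg.norm(prototypes[i] - prototypes[j]) < dist_threshold:
            parent[find(i)] = find(j)
    return [find(i) for i in range(n)]

# Four units on a line; the gap between units 1 and 2 splits them in two.
protos = np.array([[0.0], [0.2], [5.0], [5.1]])
edges = [(0, 1), (1, 2), (2, 3)]
cluster_labels = cluster_units(protos, edges, dist_threshold=1.0)
print(len(set(cluster_labels)))  # 2 clusters discovered automatically
```

Because clusters are read off as connected components of the map graph, nothing forces them to be convex: any chain of close neighbouring units forms a cluster, whatever its shape.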
International Conference on Machine Learning and Applications | 2007
Guénaël Cabanes; Younès Bennani
One of the most crucial questions in many real-world clustering applications is determining a suitable number of clusters, also known as the model selection problem. Determining the optimum number of clusters is an ill-posed problem for which there is no simple way of knowing that number without a priori knowledge. In this paper we propose a new two-level clustering algorithm based on self-organizing maps, called S2L-SOM, which allows automatic determination of the number of clusters during learning. Estimating the true number of clusters is related to cluster stability, which involves the validity of the clusters generated by the learning algorithm. To measure this stability we use a sub-sampling method. The great advantage of our proposed algorithm, compared to common partitional clustering methods, is that it is not restricted to convex clusters but can recognize arbitrarily shaped clusters. The validity of this algorithm is superior to that of standard two-level clustering methods such as SOM+K-means and SOM+hierarchical agglomerative clustering. This is demonstrated on a set of critical clustering problems.
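The sub-sampling stability idea can be sketched as follows: cluster many random subsamples and check how consistently each pair of points is grouped together or apart. This is an illustrative implementation, not the paper's exact measure:

```python
import numpy as np

def pair_stability(data, cluster_fn, n_subsamples=20, frac=0.8, seed=0):
    """Sub-sampling stability sketch: repeatedly cluster random subsamples
    and score how consistently each pair of points lands in the same
    cluster (or consistently apart); stable solutions score near 1."""
    rng = np.random.default_rng(seed)
    n = len(data)
    together = np.zeros((n, n))
    seen = np.zeros((n, n))
    for _ in range(n_subsamples):
        idx = rng.choice(n, size=int(frac * n), replace=False)
        labels = cluster_fn(data[idx])
        for a in range(len(idx)):
            for b in range(a + 1, len(idx)):
                i, j = sorted((int(idx[a]), int(idx[b])))
                seen[i, j] += 1
                together[i, j] += labels[a] == labels[b]
    p = together[seen > 0] / seen[seen > 0]
    return float(np.maximum(p, 1.0 - p).mean())  # per-pair consistency

# A trivially stable rule: cluster by the sign of the first coordinate.
data = np.array([[-1.0], [-0.9], [1.0], [1.1]])
print(pair_stability(data, lambda d: (d[:, 0] > 0).astype(int)))  # 1.0
```

A clustering whose pair assignments flip between subsamples scores well below 1, which is the signal used to prefer one candidate number of clusters over another.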
Neural Computation | 1995
Younès Bennani
This paper presents and evaluates a modular/hybrid connectionist system for speaker identification. Modularity has emerged as a powerful technique for reducing the complexity of connectionist systems, and allowing a priori knowledge to be incorporated into their design. Text-independent speaker identification is an inherently complex task where the amount of training data is often limited. It thus provides an ideal domain to test the validity of the modular/hybrid connectionist approach. To achieve such identification, we develop, in this paper, an architecture based upon the cooperation of several connectionist modules, and a Hidden Markov Model module. When tested on a population of 102 speakers extracted from the DARPA-TIMIT database, perfect identification was obtained.
Information Fusion | 2018
Antoine Cornuéjols; Cédric Wemmert; Pierre Gançarski; Younès Bennani
Clustering is a type of unsupervised learning whose goal is to partition a set of objects into groups called clusters. Faced with the difficulty of designing a general-purpose clustering algorithm and of choosing a good, let alone perfect, set of criteria for clustering a data set, one solution is to resort to a variety of clustering procedures based on different techniques, parameters and/or initializations, in order to construct one (or several) final clustering(s). The hope is that by combining several clustering solutions, each with its own bias and imperfections, one will get a better overall solution. In the cooperative clustering model, such as Ensemble Clustering, a set of clustering algorithms is run in parallel on a given data set and the local results are combined to get a hopefully better overall clustering. In the collaborative framework, the goal is that each local computation, quite possibly applied to distinct data sets, benefits from the work done by the other collaborators. This paper is dedicated to collaborative clustering. In particular, after a brief overview of clustering and the major issues linked to it, it presents the main challenges related to organizing and controlling the collaborative process.
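The cooperative (ensemble) combination step mentioned above can be sketched with a co-association matrix, a common way to merge several clusterings; this is a generic illustration, not a method from the paper:

```python
import numpy as np

def co_association(label_sets):
    """Ensemble-style combination sketch: average, over several clusterings
    of the same objects, how often each pair falls in the same cluster."""
    label_sets = np.asarray(label_sets)  # shape (n_clusterings, n_objects)
    n = label_sets.shape[1]
    ca = np.zeros((n, n))
    for labels in label_sets:
        ca += labels[:, None] == labels[None, :]
    return ca / len(label_sets)

# Three clusterings of four objects; the first two objects always co-cluster.
runs = [[0, 0, 1, 1],
        [1, 1, 0, 2],
        [0, 0, 0, 1]]
ca = co_association(runs)
print(ca[0, 1])  # 1.0: objects 0 and 1 agree across all runs
```

A final clustering can then be extracted from this matrix, for instance by thresholding it or feeding it to a hierarchical method, so that pairs grouped by many of the individual solutions end up together.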