
Publication


Featured research published by Jean-François Connolly.


Information Sciences | 2012

An adaptive classification system for video-based face recognition

Jean-François Connolly; Eric Granger; Robert Sabourin

In many practical applications, new information may emerge from the environment at different points in time after a classification system has originally been deployed. For instance, in biometric systems, new data may be acquired and used to enroll or to update knowledge of an individual. In this paper, an adaptive classification system (ACS) is proposed for video-based face recognition. It combines a fuzzy ARTMAP neural network classifier, a dynamic particle swarm optimization (DPSO) algorithm, and a long term memory (LTM). A novel DPSO-based learning strategy is also presented for incremental learning of new data with this ACS. This strategy jointly optimizes the classifier weights, architecture, and user-defined hyperparameters such that the classification rate is maximized. Performance of this system is assessed in terms of classification rate and resource requirements for incremental learning of data blocks coming from real-world video databases. The necessity of an LTM to store validation data is shown empirically for different enrollment and update scenarios. In addition, incremental learning is shown to constitute a dynamic optimization problem where the optimal hyperparameter values change over time. Simulation results indicate that the proposed system can provide a significantly higher classification rate than that of fuzzy ARTMAP alone during incremental learning. However, optimization of ACS parameters requires more resources: the ACS needs several training sequences to produce the optimal solution, and adapting fuzzy ARTMAP parameters according to classification rate tends to require more category neurons and training epochs.
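The hyperparameter search described above can be illustrated with the canonical particle swarm update. This is a rough sketch only, not the paper's DPSO implementation: the `fitness` callable standing in for fuzzy ARTMAP's validation classification rate, and all constants (inertia 0.7, acceleration 1.5), are illustrative assumptions.

```python
import random

def pso_maximize(fitness, dim, n_particles=10, iters=30, bounds=(0.0, 1.0)):
    """Canonical PSO maximizing `fitness` over a box-constrained search space.
    Each particle is one candidate hyperparameter vector."""
    lo, hi = bounds
    pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                     # personal best positions
    pbest_f = [fitness(p) for p in pos]             # and their fitness values
    g = max(range(n_particles), key=pbest_f.__getitem__)
    gbest, gbest_f = pbest[g][:], pbest_f[g]        # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * r1 * (pbest[i][d] - pos[i][d])
                             + 1.5 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            f = fitness(pos[i])
            if f > pbest_f[i]:                      # update personal best
                pbest[i], pbest_f[i] = pos[i][:], f
                if f > gbest_f:                     # and global best
                    gbest, gbest_f = pos[i][:], f
    return gbest, gbest_f

# Toy stand-in for a validation classification rate, peaked at (0.3, 0.6).
random.seed(0)
best, score = pso_maximize(lambda h: 1.0 - (h[0] - 0.3) ** 2 - (h[1] - 0.6) ** 2, dim=2)
```

A dynamic variant (DPSO) would additionally re-evaluate the stored `pbest_f` and `gbest_f` values whenever a new data block changes the fitness landscape, which is what makes incremental learning a dynamic optimization problem.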


Pattern Recognition | 2012

Evolution of heterogeneous ensembles through dynamic particle swarm optimization for video-based face recognition

Jean-François Connolly; Eric Granger; Robert Sabourin

In many real-world applications, pattern recognition systems are designed a priori using limited and imbalanced data acquired from complex changing environments. Since new reference data often becomes available during operations, performance could be maintained or improved by adapting these systems through supervised incremental learning. To avoid knowledge corruption and sustain a high level of accuracy over time, an adaptive multiclassifier system (AMCS) may integrate information from diverse classifiers that are guided by a population-based evolutionary optimization algorithm. In this paper, an incremental learning strategy based on dynamic particle swarm optimization (DPSO) is proposed to evolve heterogeneous ensembles of classifiers (where each classifier corresponds to a particle) in response to new reference samples. This new strategy is applied to video-based face recognition, using an AMCS that consists of a pool of fuzzy ARTMAP (FAM) neural networks for classification of facial regions, and a niching version of DPSO that optimizes all FAM parameters such that the classification rate is maximized. Given that diversity within a dynamic particle swarm is correlated with diversity within a corresponding pool of base classifiers, DPSO properties are exploited to generate and evolve diversified pools of FAM classifiers, and to efficiently select ensembles among the pools based on accuracy and particle swarm diversity. Performance of the proposed strategy is assessed in terms of classification rate and resource requirements under different incremental learning scenarios, where new reference data is extracted from real-world video streams. Simulation results indicate that the DPSO strategy provides an efficient way to evolve ensembles of FAM networks in an AMCS. Maintaining particle diversity in the optimization space yields a level of accuracy that is comparable to AMCS using reference ensemble-based and batch learning techniques, but requires significantly lower computational complexity than assessing diversity among classifiers in the feature or decision spaces.
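The key computational point, that particle diversity in the optimization space is a cheap proxy for classifier diversity, can be illustrated with a small greedy selection routine. This is a hedged sketch under assumed names (`accuracy`, `positions`, a trade-off weight `alpha`); it is not the paper's exact selection procedure.

```python
def swarm_distance(p, chosen, positions):
    """Diversity proxy: Euclidean distance in the optimization space from
    particle p to the nearest already-selected particle. Measuring diversity
    here is cheaper than comparing classifiers in the decision space."""
    if not chosen:
        return 1.0
    return min(
        sum((a - b) ** 2 for a, b in zip(positions[p], positions[c])) ** 0.5
        for c in chosen
    )

def greedy_ensemble(accuracy, positions, k=2, alpha=0.5):
    """Greedily select k classifiers (particles), scoring each candidate by a
    weighted sum of its accuracy and its swarm diversity."""
    chosen, pool = [], list(accuracy)
    while pool and len(chosen) < k:
        best = max(pool, key=lambda p: alpha * accuracy[p]
                                       + (1 - alpha) * swarm_distance(p, chosen, positions))
        chosen.append(best)
        pool.remove(best)
    return chosen

# Three candidate FAM networks: "b" is nearly a duplicate of "a" in the
# optimization space, so the diverse-but-weaker "c" is preferred to it.
accuracy = {"a": 0.90, "b": 0.88, "c": 0.60}
positions = {"a": (0.10, 0.10), "b": (0.12, 0.10), "c": (0.90, 0.90)}
ensemble = greedy_ensemble(accuracy, positions, k=2)
```

With these toy values the routine selects `["a", "c"]`: the most accurate network first, then the particle farthest from it rather than its near-duplicate.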


Applied Soft Computing | 2013

Dynamic multi-objective evolution of classifier ensembles for video face recognition

Jean-François Connolly; Eric Granger; Robert Sabourin

Due to a limited control over changing operational conditions and personal physiology, systems used for video-based face recognition are confronted with complex and changing pattern recognition environments. Although a limited amount of reference data is initially available during enrollment, new samples often become available over time, through re-enrollment, post analysis and labeling of operational data, etc. Adaptive multi-classifier systems (AMCSs) are therefore desirable for the design and incremental update of facial models. For real-time recognition of individuals appearing in video sequences, facial regions are captured with one or more cameras, and an AMCS must perform fast and efficient matching against the facial models of individuals enrolled in the system. In this paper, an incremental learning strategy based on particle swarm optimization (PSO) is proposed to efficiently evolve heterogeneous classifier ensembles in response to new reference data. This strategy is applied to an AMCS where all parameters of a pool of fuzzy ARTMAP (FAM) neural network classifiers (i.e., a swarm of classifiers), each one corresponding to a particle, are co-optimized such that both error rate and network size are minimized. To provide a high level of accuracy over time while minimizing the computational complexity, the AMCS integrates information from multiple diverse classifiers, where learning is guided by an aggregated dynamical niching PSO (ADNPSO) algorithm that optimizes networks according to both of these objectives. Moreover, pools of FAM networks are evolved to maintain (1) genotype diversity of solutions around local optima in the optimization search space and (2) phenotype diversity in the objective space. Accurate and low cost ensembles are thereby designed by selecting classifiers on the basis of accuracy, and both genotype and phenotype diversity. For proof-of-concept validation, the proposed strategy is compared to AMCSs where incremental learning of FAM networks is guided through mono- and multi-objective optimization. Performance is assessed in terms of video-based error rate and resource requirements under different incremental learning scenarios, where new data is extracted from real-world video streams (IIT-NRC and MoBo). Simulation results indicate that the proposed strategy provides a level of accuracy that is comparable to that of using mono-objective optimization and reference face recognition systems, yet requires a fraction of the computational cost (between 16% and 20% of a mono-objective strategy, depending on the database and scenario).
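At the heart of any multi-objective selection of this kind sits a dominance test on the two objectives, error rate and network size, both minimized. The fragment below is a generic non-dominated filter, not the ADNPSO algorithm itself; the candidate values are illustrative.

```python
def non_dominated(points):
    """Indices of solutions that no other solution dominates on (error, size),
    where dominating means no worse on both objectives and better on at least one."""
    front = []
    for i, (e_i, s_i) in enumerate(points):
        dominated = any(
            e_j <= e_i and s_j <= s_i and (e_j < e_i or s_j < s_i)
            for j, (e_j, s_j) in enumerate(points) if j != i
        )
        if not dominated:
            front.append(i)
    return front

# Four candidate networks as (error rate, category count) pairs.
candidates = [(0.10, 50), (0.20, 30), (0.15, 60), (0.05, 90)]
front = non_dominated(candidates)
```

Here `(0.15, 60)` is dominated by `(0.10, 50)` (worse on both objectives), so the Pareto front consists of the other three candidates, each trading accuracy against network size.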


Congress on Evolutionary Computation | 2010

An adaptive ensemble of fuzzy ARTMAP neural networks for video-based face classification

Jean-François Connolly; Eric Granger; Robert Sabourin

A key feature of population-based optimization algorithms is the ability to explore a search space and make a decision based on multiple solutions. In this paper, an incremental learning strategy based on a dynamic particle swarm optimization (DPSO) algorithm is used to produce heterogeneous ensembles of classifiers for video-based face recognition. This strategy is applied to an adaptive classification system (ACS) comprised of a swarm of fuzzy ARTMAP (FAM) neural network classifiers, a DPSO algorithm, and a long term memory (LTM). The performance of this ACS with an ensemble of FAM networks selected among the local bests of the swarm is compared to that of the ACS with the global best network under different incremental learning scenarios. Performance is assessed in terms of classification rate and resource requirements for incremental learning of new data blocks extracted from real-world video streams, and results are given along with those of reference kNN and FAM classifiers optimized for batch learning. Simulation results indicate that the learning strategy maintains diversity within the ensemble of classifiers, providing a significantly higher classification rate than that of the best FAM network alone. However, classification with an ensemble requires more resources.


Artificial Neural Networks in Pattern Recognition | 2008

Supervised Incremental Learning with the Fuzzy ARTMAP Neural Network

Jean-François Connolly; Eric Granger; Robert Sabourin

Automatic pattern classifiers that allow for on-line incremental learning can adapt internal class models efficiently in response to new information, without retraining from the start using all training data and without being subject to catastrophic forgetting. In this paper, the performance of the fuzzy ARTMAP neural network for supervised incremental learning is compared to that of supervised batch learning. An experimental protocol is presented to assess this network's potential for incremental learning of new blocks of training data, in terms of generalization error and resource requirements, using several synthetic pattern recognition problems. The advantages and drawbacks of training fuzzy ARTMAP incrementally are assessed for different data block sizes and data set structures. Overall results indicate that the error rate of fuzzy ARTMAP is significantly higher when it is trained through incremental learning than through batch learning. As the size of training blocks decreases, the error rate achieved through incremental learning grows, but the resulting network is more compact and uses fewer training epochs. In cases where the class distributions overlap, incremental learning shows signs of over-training. With a growing number of training patterns, the error rate grows while the compression reaches a plateau.
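For readers unfamiliar with the network, the fuzzy ART operations underlying fuzzy ARTMAP can be stated in a few lines: complement coding of inputs, the choice function used to rank category neurons, the vigilance test, and the weight update. The sketch below uses the standard textbook formulas with default parameter values (`alpha`, `beta`, `rho`), not the settings used in these experiments.

```python
def complement_code(a):
    """Preprocess an input in [0,1]^M into [a, 1 - a] (length 2M)."""
    return a + [1.0 - x for x in a]

def fuzzy_and(x, y):
    """Fuzzy AND: component-wise minimum."""
    return [min(xi, yi) for xi, yi in zip(x, y)]

def choice(I, w, alpha=0.001):
    """Choice function T_j = |I ^ w_j| / (alpha + |w_j|), with |.| the L1 norm."""
    return sum(fuzzy_and(I, w)) / (alpha + sum(w))

def vigilance_ok(I, w, rho=0.75):
    """Match criterion: |I ^ w_j| / |I| >= rho."""
    return sum(fuzzy_and(I, w)) / sum(I) >= rho

def learn(I, w, beta=1.0):
    """Weight update w_j <- beta (I ^ w_j) + (1 - beta) w_j (beta = 1: fast learning)."""
    return [beta * m + (1.0 - beta) * wi for m, wi in zip(fuzzy_and(I, w), w)]

# An uncommitted category (all-ones weights) always passes vigilance and,
# under fast learning, its weights become a copy of the coded input.
I = complement_code([0.2, 0.5])
w = learn(I, [1.0] * len(I))
```

Complement coding keeps `sum(I)` constant (equal to M), which normalizes the vigilance test and prevents category weights from eroding to zero during learning.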


Computational Intelligence and Security | 2009

Incremental adaptation of fuzzy ARTMAP neural networks for video-based face classification

Jean-François Connolly; Eric Granger; Robert Sabourin

In many practical applications, new training data is acquired at different points in time, after a classification system has originally been trained. For instance, in face recognition systems, new training data may become available to enroll or to update knowledge of an individual. In this paper, a neural network classifier applied to video-based face recognition is adapted through supervised incremental learning of real-world video data. A training strategy based on particle swarm optimization is employed to co-optimize the weights, architecture and hyperparameters of the fuzzy ARTMAP network during incremental learning of new data. The performance of fuzzy ARTMAP is compared under different class update scenarios when incremental learning is performed according to three cases: (A) hyperparameters set to standard values, (B) hyperparameters optimized only at the beginning of the learning process with all classes, and (C) hyperparameters re-optimized whenever new training data becomes available. Overall results indicate that when samples from each individual enrolled in the system are employed for optimization, a higher classification rate is achieved and the solutions produced are more robust to variations caused by pattern presentation order. When all classes are refined equally, this is true with incremental learning according to case (C), whereas, if one class is refined at a time, the best performance is obtained with case (B). However, optimizing hyperparameters requires more resources: several training sequences are needed to find the optimal solution, and fuzzy ARTMAP with hyperparameters optimized according to classification rate tends to generate a high number of category nodes over a longer convergence time.


International Symposium on Neural Networks | 2008

A comparison of fuzzy ARTMAP and Gaussian ARTMAP neural networks for incremental learning

Eric Granger; Jean-François Connolly; Robert Sabourin

Automatic pattern classifiers that allow for incremental learning can adapt internal class models efficiently in response to new information, without having to retrain from the start using all the cumulative training data. In this paper, the performance of two such classifiers - the fuzzy ARTMAP and Gaussian ARTMAP neural networks - is characterized and compared for supervised incremental learning in environments where class distributions are fixed. Their potential for incremental learning of new blocks of training data, after having previously been trained, is assessed in terms of generalization error and resource requirements, for several synthetic pattern recognition problems. The advantages and drawbacks of these architectures are discussed for incremental learning with different data block sizes and data set structures. Overall results indicate that Gaussian ARTMAP is the more suitable for incremental learning, as it usually provides an error rate that is comparable to that of batch learning for these data sets, and for a wide range of training block sizes. The better performance is a result of the representation of categories as Gaussian distributions, and of the use of a category-specific learning rate that decreases during the training process. With all the data sets, the error rate obtained through incremental learning is usually significantly higher than through batch learning for fuzzy ARTMAP. Training fuzzy ARTMAP and Gaussian ARTMAP through incremental learning often requires fewer training epochs to converge, and leads to more compact networks.
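The decreasing, category-specific learning rate that the abstract credits for Gaussian ARTMAP's stability amounts, in its simplest reading, to an incremental running estimate of each category's Gaussian parameters. The sketch below (mean only, class name hypothetical) shows why such a rate protects old knowledge.

```python
class GaussianCategory:
    """Running per-category Gaussian mean. The 1/n learning rate shrinks as
    the category accumulates samples, so a single new sample can no longer
    overwrite what many earlier samples established."""

    def __init__(self, x):
        self.n = 1
        self.mean = list(x)

    def learn(self, x):
        self.n += 1
        eta = 1.0 / self.n  # category-specific rate, decreasing with n
        self.mean = [m + eta * (xi - m) for m, xi in zip(self.mean, x)]

# Seed a category at (0, 0), then present (3, 0) twice: the mean tracks
# the exact sample average regardless of presentation order.
cat = GaussianCategory([0.0, 0.0])
cat.learn([3.0, 0.0])
cat.learn([3.0, 0.0])
```

By contrast, fuzzy ARTMAP's fast-learning update applies with full strength on every presentation, which is one way to understand its higher error rates under incremental learning.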


IEEE Transactions on Neural Networks | 2009

A Multiscale Scheme for Approximating the Quantron's Discriminating Function

Jean-François Connolly; Richard Labib

Finding an accurate approximation of a discriminating function in order to evaluate its extrema is a common problem in the field of machine learning. A new type of neural network, the Quantron, generates a complicated wave function whose global maximum value is crucial for classifying patterns. To obtain an analytical approximation of this maximum, we present a multiscale scheme based on compactly supported inverted parabolas. Motivated by the Quantron's architecture as well as Laplace's method, this scheme stems from the multiresolution analysis (MRA) developed in the theory of wavelets. This approximation method is performed, first, one scale at a time and, second, as a global approach. Convergence is proved and the results are analyzed.


Congress on Evolutionary Computation | 2010

Evolving ARTMAP neural networks using Multi-Objective Particle Swarm Optimization

Eric Granger; Donavan Prieur; Jean-François Connolly

In this paper, a supervised learning strategy based on Multi-Objective Particle Swarm Optimization (MOPSO) is introduced for ARTMAP neural networks. It is based on the concept of neural network evolution, in that particles of a MOPSO swarm (i.e., network solutions) seek to determine user-defined parameters and the network (weights and architecture) such that generalisation error and network resources are minimized. The performance of this strategy has been assessed with fuzzy ARTMAP using synthetic and real-world data for video-based face classification. Simulation results indicate that when the MOPSO strategy is used to train fuzzy ARTMAP, it produces a significantly lower classification error than when trained using standard hyper-parameter settings. Furthermore, the non-dominated MOPSO solutions represent a better compromise between error and resource allocation than mono-objective PSO-based strategies that minimize only classification error. Overall, results obtained with the MOPSO strategy reveal the importance of optimizing the parameters and network for each problem, where both error and resources are minimized during fitness evaluation.


2011 IEEE Workshop on Computational Intelligence in Biometrics and Identity Management (CIBIM) | 2011

Comparing dynamic PSO algorithms for adapting classifier ensembles in video-based face recognition

Jean-François Connolly; Eric Granger; Robert Sabourin

Biometric models are typically designed a priori using a limited number of samples acquired from complex environments that change in time during operations. Therefore, these models are often poor representatives of the biometric trait to be recognized. To circumvent this problem, ensembles of classifiers can be used to integrate solutions obtained from multiple diverse classifiers. In this paper, two dynamic particle swarm optimization (DPSO) algorithms are compared for the evolution of classifier ensembles during supervised incremental learning of newly-acquired data samples in video-based face recognition. Using the properties of these population-based optimization algorithms, an incremental DPSO learning strategy for adaptive classification systems (ACSs) is employed to evolve a pool of fuzzy ARTMAP classifiers, while a heterogeneous ensemble is selected through a greedy search process that seeks to maximize both performance and diversity. The performance of the dynamic niching PSO (DNPSO) and speciation PSO (SPSO) algorithms is assessed in terms of classification rate, resource requirements and diversity for different incremental learning scenarios of new data blocks extracted from real-world video streams. Simulation results indicate that both DPSO algorithms can efficiently create accurate ensembles while reducing computational complexity. In addition, directly selecting representative subswarm particles to form diversified classifier ensembles significantly reduces the computational complexity.

Collaboration


Jean-François Connolly's top co-authors:

Eric Granger (École de technologie supérieure)
Robert Sabourin (École de technologie supérieure)
Donavan Prieur (École de technologie supérieure)
Richard Labib (École Polytechnique de Montréal)