Sylvain Chartier
University of Ottawa
Publications
Featured research published by Sylvain Chartier.
International Journal of Psychological Research | 2010
Denis Cousineau; Sylvain Chartier
Outliers are observations or measures that are suspicious because they are much smaller or much larger than the vast majority of the observations. These observations are problematic because they may not be caused by the mental process under scrutiny or may not reflect the ability under examination. The problem is that a few outliers are sometimes enough to distort the group results (by altering the mean performance, by increasing variability, etc.). In this paper, various techniques aimed at detecting potential outliers are reviewed. These techniques are subdivided into two classes: those regarding univariate data and those addressing multivariate data. Within these two classes, we consider the cases where the population distribution is known to be normal, the population is not normal but known, or the population is unknown. Recommendations are put forward in each case.
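Two of the standard techniques the review covers can be sketched briefly: a univariate z-score cutoff (appropriate under a normality assumption) and a multivariate Mahalanobis-distance criterion. The thresholds below are illustrative defaults, not recommendations from the paper.

```python
import numpy as np

def zscore_outliers(x, threshold=3.0):
    """Univariate case: flag points more than `threshold` standard
    deviations from the mean (assumes an approximately normal population)."""
    z = (x - x.mean()) / x.std(ddof=1)
    return np.abs(z) > threshold

def mahalanobis_outliers(X, threshold):
    """Multivariate case: flag rows whose squared Mahalanobis distance
    from the centroid exceeds `threshold` (e.g. a chi-square quantile
    under multivariate normality)."""
    mu = X.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(X, rowvar=False))
    d = X - mu
    d2 = np.einsum('ij,jk,ik->i', d, cov_inv, d)
    return d2 > threshold
```

Note that both statistics are themselves computed from the contaminated sample, so extreme outliers inflate the mean and covariance and can mask one another, one reason the review considers several alternatives.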
IEEE Transactions on Neural Networks | 2006
Sylvain Chartier; Mounir Boukadoum
Typical bidirectional associative memories (BAM) use an offline, one-shot learning rule, have poor memory storage capacity, are sensitive to noise, and are subject to spurious steady states during recall. Recent work on BAM has improved network performance in relation to noisy recall and the number of spurious attractors, but at the cost of an increase in BAM complexity. In all cases, the networks can only recall bipolar stimuli and, thus, are of limited use for grey-level pattern recall. In this paper, we introduce a new bidirectional heteroassociative memory model that uses a simple self-convergent iterative learning rule and a new nonlinear output function. As a result, the model can learn online without being subject to overlearning. Our simulation results show that this new model produces fewer spurious attractors than other popular BAM networks, for a comparable performance in terms of tolerance to noise and storage capacity. In addition, the novel output function enables it to learn and recall grey-level patterns in a bidirectional way.
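For context, the offline, one-shot baseline this paper improves upon is the classical Kosko-style BAM: the weight matrix accumulates outer products of bipolar pattern pairs, and recall iterates bidirectionally to a fixed point. This is only the baseline, not the paper's self-convergent rule or nonlinear output function.

```python
import numpy as np

def train_bam(pairs):
    """One-shot Hebbian learning: W is the sum of outer products y x^T
    over bipolar (+1/-1) pattern pairs."""
    return sum(np.outer(y, x) for x, y in pairs)

def bipolar(v, prev):
    """Sign function that holds the previous state when net input is zero."""
    return np.where(v > 0, 1, np.where(v < 0, -1, prev))

def recall(W, x, steps=20):
    """Bidirectional recall: iterate x -> y -> x until a fixed point."""
    y = np.ones(W.shape[0], dtype=int)
    for _ in range(steps):
        y = bipolar(W @ x, y)
        x_new = bipolar(W.T @ y, x)
        if np.array_equal(x_new, x):
            break
        x = x_new
    return x, y
```

Because the weights are bipolar outer products and the output function is a hard sign, this baseline can only store and recall bipolar stimuli, exactly the limitation the paper's grey-level-capable output function removes.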
IEEE Transactions on Neural Networks | 2005
Sylvain Chartier; Robert Proulx
This paper presents a new unsupervised attractor neural network which, contrary to optimal linear associative memory models, is able to develop nonbipolar attractors as well as bipolar attractors. Moreover, the model develops fewer spurious attractors and has a better recall performance under random noise than any other Hopfield-type neural network. Those performances are obtained by a simple Hebbian/anti-Hebbian online learning rule that directly incorporates feedback from a specific nonlinear transmission rule. Several computer simulations show the model's distinguishing properties.
Neural Networks | 2010
Mahmood Amiri; Hamed Davande; Alireza Sadeghian; Sylvain Chartier
The focus of this paper is to propose a hybrid neural network model for associative recall of analog and digital patterns. This hybrid model consists of self-feedback neural network structures (SFNN) in parallel with generalized regression neural networks (GRNN). Using a new one-shot learning algorithm developed in the paper, pattern representations are first stored as the asymptotically stable fixed points of the SFNN. Then, in the retrieving process, each pattern is applied to the GRNN to generate the corresponding initial condition and to initiate the dynamical equations of the SFNN, which should in turn output the corresponding representation. In this way, the corresponding stored patterns are retrieved even under high noise degradation. Moreover, contrary to many associative memories, the proposed hybrid model has no spurious attractors and can store both binary and real-valued patterns without any preprocessing. Several simulations confirm the theoretical analyses of the model. Results indicate that the performance of the hybrid model is better than that of recurrent associative memory and competitive with other classes of networks.
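The GRNN component can be sketched compactly: in Specht's formulation, the output for a probe is a Gaussian-kernel-weighted average of the stored targets, which is what lets a noisy probe be mapped to an initial condition near a stored fixed point. This is a generic GRNN sketch, not the paper's full hybrid architecture.

```python
import numpy as np

def grnn_predict(X_train, Y_train, x, sigma=1.0):
    """Generalized regression neural network (Specht): the prediction is a
    Gaussian-weighted average of the stored targets, with weights given by
    each training pattern's distance to the probe x."""
    d2 = np.sum((X_train - x) ** 2, axis=1)   # squared distances to probe
    w = np.exp(-d2 / (2 * sigma ** 2))        # Gaussian kernel weights
    return (w @ Y_train) / w.sum()            # normalized weighted average
```

Because the weights decay smoothly with distance, a degraded pattern still lands near the target associated with its closest stored exemplar, which is the property the hybrid model exploits for initialization.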
IEEE Transactions on Neural Networks | 2006
Sylvain Chartier; Mounir Boukadoum
Bidirectional associative memories (BAMs) have been widely used for auto- and heteroassociative learning. However, few research efforts have addressed the issue of multistep vector pattern recognition. We propose a model that can perform multistep pattern recognition without the need for a special learning algorithm, and with the capacity to learn more than two pattern series in the training set. The model can also learn pattern series of different lengths and, contrary to previous models, the stimuli can be composed of gray-level images. The paper also shows that by adding an extra autoassociative layer, the model can accomplish one-to-many association, a task that was previously exclusive to feedforward networks with context units and error backpropagation learning.
International Symposium on Neural Networks | 2007
Sylvain Chartier; Gyslain Giguère; Patrice Renaud; Jean-Marc Lina; Robert Proulx
In this paper, a new model that can ultimately create its own set of perceptual features is proposed. Using a bidirectional associative memory (BAM)-inspired architecture, the resulting model inherits properties such as attractor-like behavior and successful processing of noisy inputs, while being able to achieve principal component analysis (PCA) tasks such as feature extraction and dimensionality reduction. The model is tested by simulating image reconstruction and blind source separation tasks. Simulations show that the model fares particularly well compared to current neural PCA and independent component analysis (ICA) algorithms. It is argued that the model possesses more cognitive explanatory power than any other nonlinear/linear PCA and ICA algorithm.
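A representative neural-PCA baseline the model is compared against is Sanger's generalized Hebbian algorithm, in which simple Hebbian updates with a deflation term drive the weight rows toward the leading principal components. The sketch below shows that baseline, not the paper's BAM-based formulation.

```python
import numpy as np

def sanger_pca(X, n_components, lr=0.01, epochs=200, seed=0):
    """Sanger's generalized Hebbian algorithm: rows of W converge to the
    leading principal components of the centred data.
    Update: dW_i = lr * y_i * (x - sum_{j<=i} y_j * w_j)."""
    rng = np.random.default_rng(seed)
    X = X - X.mean(axis=0)
    W = rng.normal(scale=0.1, size=(n_components, X.shape[1]))
    for _ in range(epochs):
        for x in X:
            y = W @ x
            # tril keeps only the j <= i terms of the deflation sum
            W += lr * (np.outer(y, x) - np.tril(np.outer(y, y)) @ W)
    return W
```

For a single component this reduces to Oja's rule; unlike batch PCA, the extraction is online, which is the sense in which such networks (and the proposed BAM variant) are more cognitively plausible.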
IEEE Transactions on Neural Networks | 2009
Sylvain Chartier; Mounir Boukadoum; Mahmood Amiri
Most bidirectional associative memory (BAM) networks use a symmetrical output function for dual fixed-point behavior. In this paper, we show that by introducing an asymmetry parameter into a recently introduced chaotic BAM output function, prior knowledge can be used to momentarily disable desired attractors from memory, hence biasing the search space to improve recall performance. This property allows control of chaotic wandering, favoring given subspaces over others. In addition, reinforcement learning can then enable a dual BAM architecture to store and recall nonlinearly separable patterns. Our results allow the same BAM framework to model three different types of learning: supervised, reinforcement, and unsupervised. This ability is very promising from the cognitive modeling viewpoint. The new BAM model is also useful from an engineering perspective; our simulation results reveal a notable overall increase in BAM learning and recall performance when using a hybrid model with the general regression neural network (GRNN).
Neural Networks | 2009
Sylvain Chartier; Gyslain Giguère; Dominic Langlois
In this paper, we present a new recurrent bidirectional model that encompasses correlational, competitive and topological model properties. The simultaneous use of many classes of network behaviors allows for the unsupervised learning/categorization of perceptual patterns (through input compression) and the concurrent encoding of proximities in a multidimensional space. All of these operations are achieved within a common learning operation, and using a single set of defining properties. It is shown that the model can learn categories by developing prototype representations strictly from exposure to specific exemplars. Moreover, because the model is recurrent, it can reconstruct perfect outputs from incomplete and noisy patterns. Empirical exploration of the model's properties and performance shows that its ability for adequate clustering stems from: (1) properly distributing connection weights, and (2) producing a weight space with a low dispersion level (or higher density). In addition, since the model uses a sparse representation (k-winners), the size of the topological neighborhood can be fixed and no longer requires a decrease through time, as was the case with classic self-organizing feature maps. Since the model's learning and transmission parameters are independent of learning trials, the model can develop stable fixed points in a constrained topological architecture, while being flexible enough to learn novel patterns.
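The sparse k-winners representation mentioned above is a standard competitive operation; a minimal sketch (illustrative only, not the paper's exact transmission function) keeps the k most active units and silences the rest:

```python
import numpy as np

def k_winners(a, k):
    """k-winners-take-all: keep the k largest activations, zero the rest.
    argpartition finds the top-k indices without a full sort."""
    out = np.zeros_like(a)
    idx = np.argpartition(a, -k)[-k:]
    out[idx] = a[idx]
    return out
```

Because exactly k units are active regardless of the input, the effective neighborhood size stays constant, which is why the model can avoid the shrinking-neighborhood schedule of classic self-organizing feature maps.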
Journal of Applied Mathematics | 2011
Sylvain Chartier; Mounir Boukadoum
The brain-inspired artificial neural network approach offers the ability to develop attractors for each pattern when feedback connections are allowed. It also exhibits great stability and adaptability with regard to noise and pattern degradation, and can perform generalization tasks. In particular, the Bidirectional Associative Memory (BAM) model has shown great promise for pattern recognition owing to its capacity to be trained using a supervised or unsupervised scheme. This paper describes such a BAM, one that can encode patterns of real and binary values, perform multistep pattern recognition of variable-size time series, and accomplish many-to-one associations. Moreover, it is shown that the BAM can be generalized to multiple associative memories, and that it can be used to store associations from multiple sources as well. The various behaviors are the result of topological rearrangements alone, and the same learning and transmission functions are kept constant throughout the models. Therefore, a consistent architecture is used for different tasks, thereby increasing its practical appeal and modeling importance. Simulations demonstrate the BAM's various capacities using several types of encoding and recall situations.
International Journal of Strategic Decision Sciences | 2014
Salim Lahmiri; Mounir Boukadoum; Sylvain Chartier
The purpose of this study is to examine four major issues. First, the authors compare the performance of economic information, technical indicators, historical information, and investor sentiment measures in financial predictions using backpropagation neural networks (BPNN). Granger causality tests are applied to each category of information to select the relevant variables that statistically and significantly affect stock market shifts. Second, the authors investigate the effect on prediction accuracy of combining all four categories of information variables selected by the Granger causality test. Third, the effectiveness of different numerical techniques on the accuracy of BPNN is explored. The authors include conjugate gradient algorithms (Fletcher-Reeves update, Polak-Ribiere update, Powell-Beale restart), quasi-Newton (Broyden-Fletcher-Goldfarb-Shanno, BFGS), and the Levenberg-Marquardt (LM) algorithm, which is commonly used in the literature. Fourth, the authors compare the performance of the BPNN and support vector machine (SVM) in terms of stock market trend prediction. Their comparative study is applied to S&P500 data to predict its future moves. The out-of-sample forecasting results show that (i) historical values and sentiment measures yield higher accuracy than economic information and technical indicators, (ii) combining the four categories of information does not help improve the accuracy of the BPNN and SVM, (iii) the LM algorithm is outperformed by the Polak-Ribiere, Powell-Beale, and Fletcher-Reeves algorithms, and (iv) the BPNN outperforms the SVM except when using sentiment measures as predictive information.
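The variable-selection step rests on the bivariate Granger test: a candidate series x "Granger-causes" y if adding lagged values of x to an autoregression of y significantly reduces the residual variance. A least-squares sketch of that F-test follows (illustrative; the study's exact lag specification and significance procedure may differ).

```python
import numpy as np

def granger_f(y, x, lags):
    """F statistic for the null 'x does not Granger-cause y': compare the
    residual sum of squares of y regressed on its own lags (restricted)
    against the model that also includes lags of x (unrestricted)."""
    n = len(y)
    Y = y[lags:]
    lag_y = np.column_stack([y[lags - i:n - i] for i in range(1, lags + 1)])
    lag_x = np.column_stack([x[lags - i:n - i] for i in range(1, lags + 1)])
    ones = np.ones((n - lags, 1))

    def rss(A):
        beta, *_ = np.linalg.lstsq(A, Y, rcond=None)
        r = Y - A @ beta
        return r @ r

    rss_r = rss(np.hstack([ones, lag_y]))          # restricted model
    rss_u = rss(np.hstack([ones, lag_y, lag_x]))   # unrestricted model
    df_denom = (n - lags) - (2 * lags + 1)
    return ((rss_r - rss_u) / lags) / (rss_u / df_denom)
```

A large F (relative to the F(lags, df_denom) distribution) rejects the null, so the corresponding predictor is retained as an input to the BPNN.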