Frédéric Dandurand
McGill University
Publications
Featured research published by Frédéric Dandurand.
Behavior Research Methods | 2008
Frédéric Dandurand; Thomas R. Shultz; Kristine H. Onishi
Online experiments have recently become very popular, and, in comparison with traditional lab experiments, they may have several advantages, such as reduced demand characteristics, automation, and generalizability of results to wider populations (Birnbaum, 2004; Reips, 2000, 2002a, 2002b). We replicated Dandurand, Bowen, and Shultz's (2004) lab-based problem-solving experiment as an Internet experiment. Consistent with previous results, we found that participants who watched demonstrations of successful problem-solving sessions or who read instructions outperformed those who were told only whether they had solved problems correctly. Online participants were less accurate than lab participants, but there was no interaction with learning condition. Thus, we conclude that online and lab results are consistent. One disadvantage was a high dropout rate among online participants; however, combining the online experiment with the department subject pool worked well.
Journal of Physiology-Paris | 2010
Hervé Glotin; P. Warnier; Frédéric Dandurand; Stéphane Dufau; Bernard Lété; Claude Touzet; Johannes C. Ziegler; Jonathan Grainger
An Adaptive Resonance Theory (ART) network was trained to identify unique orthographic word forms. Each word input to the model was represented as an unordered set of ordered letter pairs (open bigrams) that implement a flexible prelexical orthographic code. The network learned to map this prelexical orthographic code onto unique word representations (orthographic word forms). The network was trained on a realistic corpus of reading textbooks used in French primary schools. The amount of training was strictly identical to children's exposure to reading material from grade 1 to grade 5. Network performance was examined at each grade level. Adjustment of the learning and vigilance parameters of the network allowed us to reproduce the developmental growth of word identification performance seen in children. The network exhibited a word frequency effect and was found to be sensitive to the order of presentation of word inputs, particularly with low frequency words. These words were better learned with a randomized presentation order compared with the order of presentation in the school books. These results open up interesting perspectives for the application of ART networks in the study of the dynamics of learning to read.
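The open-bigram code described in this abstract is simple to illustrate. The sketch below is a minimal, hypothetical implementation; the maximum gap a letter pair may span (`max_gap`) is an assumed value, not a parameter reported in the paper.

```python
def open_bigrams(word, max_gap=2):
    """Unordered set of ordered letter pairs (open bigrams).

    max_gap -- how many intervening letters a pair may skip;
    an assumed value, not taken from the paper.
    """
    pairs = set()
    for i in range(len(word)):
        # pair letter i with each following letter up to max_gap positions away
        for j in range(i + 1, min(i + max_gap + 2, len(word))):
            pairs.add(word[i] + word[j])
    return pairs

# each pair preserves within-pair letter order, but the set itself is unordered
print(sorted(open_bigrams("lire")))   # ['ie', 'ir', 'le', 'li', 'lr', 're']
```

Note that the set representation discards where in the word each pair occurred, which is exactly what makes the code flexible (tolerant of letter transpositions and insertions).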
International Journal of Humanoid Robotics | 2007
Thomas R. Shultz; Francois Rivest; László Egri; Jean-Philippe Thivierge; Frédéric Dandurand
The new field of developmental robotics faces the formidable challenge of implementing effective learning mechanisms in complex, dynamic environments. We make a case that knowledge-based learning algorithms might help to meet this challenge. A constructive neural learning algorithm, knowledge-based cascade-correlation (KBCC), autonomously recruits previously-learned networks in addition to the single hidden units recruited by ordinary cascade-correlation. This enables learning by analogy when adequate prior knowledge is available, learning by induction from examples when there is no relevant prior knowledge, and various combinations of analogy and induction. A review of experiments with KBCC indicates that recruitment of relevant existing knowledge typically speeds learning and sometimes enables learning of otherwise impossible problems. Some additional domains of interest to developmental robotics are identified in which knowledge-based learning seems essential. The characteristics of KBCC in relation to other knowledge-based neural learners and analogical reasoning are summarized as is the neurological basis for learning from knowledge. Current limitations of this approach and directions for future work are discussed.
Neural Computation | 2011
Thomas Hannagan; Frédéric Dandurand; Jonathan Grainger
We studied the feedforward network proposed by Dandurand et al. (2010), which maps location-specific letter inputs to location-invariant word outputs, probing the hidden layer to determine the nature of the code. Hidden patterns for words were densely distributed, and K-means clustering on single letter patterns produced evidence that the network had formed semi-location-invariant letter representations during training. The possible confound with superseding bigram representations was ruled out, and linear regressions showed that any word pattern was well approximated by a linear combination of its constituent letter patterns. Emulating this code using overlapping holographic representations (Plate, 1995) uncovered a surprisingly acute and useful correspondence with the network, stemming from a broken symmetry in the connection weight matrix and related to the group-invariance theorem (Minsky & Papert, 1969). These results also explain how the network can reproduce relative-position and transposition priming effects found in humans.
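Plate's (1995) holographic reduced representations bind vectors with circular convolution. The sketch below shows only that general bind/unbind mechanism; the dimensionality `D` and the letter and slot vectors are illustrative assumptions, not the overlapping-code emulation actually fitted to the network in this paper.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 512  # vector dimensionality -- an assumption for illustration

def hrr():
    """A random vector with expected unit length, as in Plate (1995)."""
    return rng.normal(0.0, 1.0 / np.sqrt(D), D)

def bind(a, b):
    """Circular convolution: the binding operation of HRRs."""
    return np.fft.irfft(np.fft.rfft(a) * np.fft.rfft(b), n=D)

def unbind(trace, role):
    """Approximate inverse binding via the involution of the role vector."""
    inv = np.concatenate(([role[0]], role[:0:-1]))
    return bind(trace, inv)

# hypothetical letter-identity and position (slot) vectors
letters = {c: hrr() for c in "word"}
slots = [hrr() for _ in range(4)]

# a word pattern: superposition of letter-in-position bindings
word_vec = sum(bind(letters[c], slots[i]) for i, c in enumerate("word"))

# probing slot 0 retrieves a noisy vector most similar to 'w'
probe = unbind(word_vec, slots[0])
sims = {c: float(probe @ v) for c, v in letters.items()}
```

The key property for priming effects is graceful degradation: unbinding a slightly wrong slot still yields partial similarity to the correct letter, rather than failing outright.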
Connection Science | 2007
Frédéric Dandurand; Vincent G. Berthiaume; Thomas R. Shultz
Cascade-correlation (cascor) networks grow by recruiting hidden units to adjust their computational power to the task being learned. The standard cascor algorithm recruits each hidden unit on a new layer, creating deep networks. In contrast, the flat cascor variant adds all recruited hidden units on a single hidden layer. Student–teacher network approximation tasks were used to investigate the ability of flat and standard cascor networks to learn the input–output mapping of other, randomly initialized flat and standard cascor networks. For low-complexity approximation tasks, there was no significant performance difference between flat and standard student networks. Contrary to the common belief that standard cascor does not generalize well due to cascading weights creating deep networks, we found that both standard and flat cascor generalized well on problems of varying complexity. On high-complexity tasks, flat cascor networks had fewer connection weights and learned with less computational cost than standard networks did.
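Both standard and flat cascor use the same recruitment criterion and differ only in where the winning unit is wired (a new layer versus the single hidden layer). A minimal sketch of that criterion, scoring candidates by the covariance between their activations and the network's residual error, using made-up toy data:

```python
import numpy as np

def candidate_score(candidate_acts, residual_errors):
    """Cascor's recruitment criterion: the summed absolute covariance
    between a candidate unit's activations and each output's residual error."""
    v = candidate_acts - candidate_acts.mean()
    e = residual_errors - residual_errors.mean(axis=0)
    return float(np.abs(v @ e).sum())

# toy data (illustrative values, not taken from the paper)
rng = np.random.default_rng(1)
acts_good = rng.normal(size=100)      # candidate that tracks the error
acts_poor = rng.normal(size=100)      # unrelated candidate
errors = rng.normal(size=(100, 2))    # residual error on two outputs
errors[:, 0] += 0.8 * acts_good       # make one candidate correlate with it

# the correlated candidate wins the recruitment competition
winner = max([acts_good, acts_poor], key=lambda a: candidate_score(a, errors))
```

In a full implementation a pool of candidates is trained to maximize this score before the winner is frozen and installed; the sketch only shows the selection step.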
Connection Science | 2010
Frédéric Dandurand; Jonathan Grainger; Stéphane Dufau
Neural networks were trained with backpropagation to map location-specific letter identities (letters coded as a function of their position in a horizontal array) onto location-invariant lexical representations. Networks were trained on a corpus of 1179 real words, and on artificial lexica in which the importance of letter order was systematically manipulated. Networks were tested with two benchmark phenomena – transposed-letter priming and relative-position priming – thought to reflect flexible orthographic processing in skilled readers. Networks were shown to exhibit the desired priming effects, and the sizes of the effects were shown to depend on the relative importance of letter order information for performing location-invariant mapping. Presenting words at different locations was found to be critical for building flexible orthographic representations in these networks, since this flexibility was absent when stimulus location did not vary.
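The location-specific input code described above can be sketched as a slot-based one-hot array, with the same word placed at different starting positions. The array width `N_SLOTS` is an assumption for illustration, not the paper's input dimensionality.

```python
import numpy as np

ALPHABET = "abcdefghijklmnopqrstuvwxyz"
N_SLOTS = 10  # width of the horizontal input array -- an assumed value

def encode(word, position):
    """Location-specific code: a one-hot letter identity in each slot,
    with the word starting at `position` in the array."""
    x = np.zeros((N_SLOTS, len(ALPHABET)))
    for i, c in enumerate(word):
        x[position + i, ALPHABET.index(c)] = 1.0
    return x.ravel()

# the same word at two locations yields two different input vectors,
# which is what forces the network to learn a location-invariant mapping
a = encode("lire", 0)
b = encode("lire", 3)
```

Training only ever with `position == 0` would remove the need for invariance, which is consistent with the abstract's finding that flexibility was absent when stimulus location did not vary.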
IEEE Transactions on Autonomous Mental Development | 2009
Frédéric Dandurand; Thomas R. Shultz
We compared computational models and human performance on learning to solve a high-level, planning-intensive problem. Humans and models were subjected to three learning regimes: reinforcement, imitation, and instruction. We modeled learning by reinforcement (rewards) using SARSA, a softmax selection criterion and a neural network function approximator; learning by imitation using supervised learning in a neural network; and learning by instructions using a knowledge-based neural network. We had previously found that human participants who were told if their answers were correct or not (a reinforcement group) were less accurate than participants who watched demonstrations of successful solutions of the task (an imitation group) and participants who read instructions explaining how to solve the task. Furthermore, we had found that humans who learn by imitation and instructions performed more complex solution steps than those trained by reinforcement. Our models reproduced this pattern of results.
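The reinforcement model combines SARSA with softmax (Boltzmann) action selection. A minimal tabular sketch of those two ingredients follows; note the paper approximates the value function with a neural network rather than a table, so this is an illustration of the update rule only.

```python
import numpy as np

def softmax_choice(q_values, temperature=1.0, rng=None):
    """Softmax (Boltzmann) action selection over Q-values."""
    rng = rng or np.random.default_rng()
    z = np.asarray(q_values, dtype=float) / temperature
    p = np.exp(z - z.max())                # subtract max for numerical stability
    p /= p.sum()
    return int(rng.choice(len(p), p=p))

def sarsa_update(q, s, a, r, s_next, a_next, alpha=0.1, gamma=0.95):
    """On-policy SARSA: move Q(s,a) toward r + gamma * Q(s',a')."""
    q[s, a] += alpha * (r + gamma * q[s_next, a_next] - q[s, a])

# tiny tabular illustration with assumed learning parameters
q = np.zeros((2, 3))
sarsa_update(q, 0, 1, 1.0, 1, 2)           # reward of 1 for action 1 in state 0
```

Lowering the temperature makes selection greedier; raising it makes exploration more uniform, which is the knob that distinguishes softmax from epsilon-greedy selection.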
Behavior Research Methods | 2010
Frédéric Dandurand; Thomas R. Shultz
Growth phenomena are often nonlinear and may contain spurts, characterized by a local increase in the rate of growth. Because measurement error and noise may produce apparent spurts, it is important to identify systematic and reliable spurts. We describe a system, automatic maxima detection (AMD), for statistically identifying significant spurts and computing (1) point of maximal velocity, when the spurt was most intense; (2) start, when the spurt started; (3) amplitude, the intensity of the spurt; and (4) duration, the length of the spurt. We also introduce a software implementation of AMD in MATLAB. In growth of height data, AMD showed a reliable pubertal growth spurt for most children and a reliable prepubertal spurt for some children. In simulated growth of vocabulary, AMD showed a large global spurt and several minispurts. In real vocabulary growth, AMD showed a few spurts. Advantages of AMD include improvements in objectivity, automaticity, quantification, and comprehensiveness.
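AMD's statistical machinery is not spelled out in the abstract; the sketch below shows only the underlying idea of locating a spurt as a local maximum of the smoothed growth velocity. It is a hypothetical illustration, not the AMD procedure, which adds reliability tests and the start/amplitude/duration measures.

```python
def velocity_peaks(y, window=3, min_height=0.0):
    """Find candidate growth spurts as local maxima of the smoothed
    growth rate (first differences). A crude sketch only."""
    v = [b - a for a, b in zip(y, y[1:])]          # growth velocity
    half = window // 2
    sm = []
    for i in range(len(v)):                        # moving-average smoothing
        seg = v[max(0, i - half): i + half + 1]
        sm.append(sum(seg) / len(seg))
    peaks = [i for i in range(1, len(sm) - 1)
             if sm[i] > sm[i - 1] and sm[i] >= sm[i + 1]
             and sm[i] > min_height]
    return peaks, sm

# synthetic height-like series with one spurt (illustrative numbers)
y = [100, 101, 102, 103, 105, 109, 114, 116, 117, 118]
peaks, _ = velocity_peaks(y)                       # spurt around index 4-5
```

Smoothing before peak-picking matters because, as the abstract notes, measurement noise alone can produce apparent spurts.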
International Symposium on Neural Networks | 2004
J.P. Thivierge; Frédéric Dandurand; Thomas R. Shultz
A new type of neural network is introduced, where symbolic rules are combined using a constructive algorithm. Initially, symbolic rules are converted into networks. Rule-based cascade-correlation (RBCC) then grows its architecture by a competitive process where these rule-based networks strive at capturing as much of the error as possible. A pruning technique for RBCC is also introduced, and the performance of the algorithm is assessed both on a simple artificial problem and on a real-world task of DNA splice-junction determination. Results of the real-world problem demonstrate the advantages of RBCC over other related algorithms in terms of processing time and accuracy.
International Conference on Development and Learning | 2007
Frédéric Dandurand; Thomas R. Shultz; Francois Rivest
We previously measured human performance on a complex problem-solving task that involves finding which ball in a set is lighter or heavier than the others with a limited number of weighings. None of the participants found a correct solution within 30 minutes without the help of demonstrations or instructions. In this paper, we model human performance on this task using a biologically plausible computational model based on reinforcement learning. We use a SARSA-based softmax learning algorithm where the reward function is learned using cascade-correlation neural networks. First, we find that the task can be learned by reinforcement alone with substantial training. Second, we study the number of alternative actions available to softmax and find that 5 works well for this problem, which is compatible with estimates of human working memory size. Third, we find that simulations are less accurate than humans given an equivalent amount of training. We suggest that humans use means-ends analysis to self-generate rewards in non-terminal states; implementing such self-generated rewards might improve model accuracy. Finally, we pretrain models to prefer simple actions, like humans. We partially capture a simplicity bias, and find that it has little impact on accuracy.