Publications


Featured research published by Julien Mayor.


Psychological Review | 2010

A neurocomputational account of taxonomic responding and fast mapping in early word learning

Julien Mayor; Kim Plunkett

We present a neurocomputational model with self-organizing maps that accounts for the emergence of taxonomic responding and fast mapping in early word learning, as well as a rapid increase in the rate of acquisition of words observed in late infancy. The quality and efficiency of generalization of word-object associations is directly related to the quality of prelexical, categorical representations in the model. We show how synaptogenesis supports coherent generalization of word-object associations and show that later synaptic pruning minimizes metabolic costs without being detrimental to word learning. The role played by joint-attentional activities is identified in the model, both at the level of selecting efficient cross-modal synapses and at the behavioral level, by accelerating and refining overall vocabulary acquisition. The model can account for the qualitative shift in the way infants use words, from an associative to a referential-like use, for the pattern of overextension errors in production and comprehension observed during early childhood and typicality effects observed in lexical development. Interesting by-products of the model include a potential explanation of the shift from prototype to exemplar-based effects reported for adult category formation, an account of mispronunciation effects in early lexical development, and extendability to include accounts of individual differences in lexical development and specific disorders such as Williams syndrome. The model demonstrates how an established constraint on lexical learning, which has often been regarded as domain-specific, can emerge from domain-general learning principles that are simultaneously biologically, psychologically, and socially plausible.
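
As an illustration only (not the authors' implementation), the Python sketch below captures the core architecture described in the abstract under toy assumptions: two self-organizing maps are trained without supervision on made-up "visual" and "acoustic" inputs, and Hebbian cross-modal connections, strengthened only during paired (joint-attention-like) presentations, let a heard label retrieve activity on the visual map. All data, map sizes, and learning parameters are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_som(data, grid=8, iters=2000, lr0=0.5, sigma0=3.0):
    """Train a square self-organizing map with standard Kohonen updates."""
    weights = rng.random((grid, grid, data.shape[1]))
    # Grid coordinates, used to compute neighbourhood distances around the winner.
    coords = np.stack(np.meshgrid(np.arange(grid), np.arange(grid), indexing="ij"), axis=-1)
    for t in range(iters):
        x = data[rng.integers(len(data))]
        dists = np.linalg.norm(weights - x, axis=-1)
        bmu = np.unravel_index(np.argmin(dists), dists.shape)   # best-matching unit
        lr = lr0 * np.exp(-t / iters)
        sigma = sigma0 * np.exp(-t / iters)
        neigh = np.exp(-np.sum((coords - np.array(bmu)) ** 2, axis=-1) / (2 * sigma ** 2))
        weights += lr * neigh[..., None] * (x - weights)
    return weights

def activation(weights, x):
    """Normalized Gaussian activation of each map unit for input x."""
    d = np.linalg.norm(weights - x, axis=-1)
    a = np.exp(-d ** 2)
    return a / a.sum()

# Toy data: 3 "visual" categories and 3 "acoustic" labels (hypothetical stand-ins, not real stimuli).
protos_v = rng.random((3, 10))
protos_a = rng.random((3, 5))
visual = np.vstack([p + 0.05 * rng.standard_normal((40, 10)) for p in protos_v])
labels = np.vstack([p + 0.05 * rng.standard_normal((40, 5)) for p in protos_a])

som_v = train_som(visual)
som_a = train_som(labels)

# Hebbian cross-modal matrix, updated only on paired (joint-attention) presentations.
cross = np.zeros((8 * 8, 8 * 8))
for v, a in zip(visual, labels):
    cross += np.outer(activation(som_a, a).ravel(), activation(som_v, v).ravel())

# A heard label now retrieves activity on the visual map; its peak should fall in the
# map region of the corresponding visual category (taxonomic responding).
for k in range(3):
    response = activation(som_a, protos_a[k]).ravel() @ cross
    retrieved = np.unravel_index(np.argmax(response), (8, 8))
    visual_bmu = np.unravel_index(np.argmin(np.linalg.norm(som_v - protos_v[k], axis=-1)), (8, 8))
    print(f"label {k}: retrieved visual unit {retrieved}, visual category unit {visual_bmu}")
```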


Cognitive Science | 2009

Labels as Features (Not Names) for Infant Categorization: A Neurocomputational Approach

Valentina Gliozzi; Julien Mayor; Jon Fan Hu; Kim Plunkett

A substantial body of experimental evidence has demonstrated that labels have an impact on infant categorization processes. Yet little is known regarding the nature of the mechanisms by which this effect is achieved. We distinguish between two competing accounts: supervised name-based categorization and unsupervised feature-based categorization. We describe a neurocomputational model of infant visual categorization, based on self-organizing maps, that implements the unsupervised feature-based approach. The model successfully reproduces experiments demonstrating the impact of labeling on infant visual categorization reported in Plunkett, Hu, and Cohen (2008). It mimics infant behavior in both the familiarization and testing phases of the procedure, using a training regime that involves only single presentations of each stimulus and using just 24 participant networks per experiment. The model predicts that the observed behavior in infants is due to a transient form of learning that might lead to the emergence of hierarchically organized categorical structure and that the impact of labels on categorization is influenced by the perceived similarity and the sequence in which the objects are presented. The results suggest that early in development, say before 12 months old, labels need not act as invitations to form categories nor highlight the commonalities between objects, but they may play a more mundane but nevertheless powerful role as additional features that are processed in the same fashion as other features that characterize objects and object categories.
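
The contrast drawn in the abstract can be sketched as follows (illustrative code, not the published model): instead of using the label as a supervisory signal, the label is simply concatenated with the visual features, and a single self-organizing map is trained unsupervised on the joint vector. Distinct labels then tend to pull the two object clusters onto more separate regions of the map than a shared label does. The clusters, label codes, and parameters are invented for the demo.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical stimuli: two visually similar object clusters (8 visual features each).
cluster_a = 0.50 + 0.05 * rng.standard_normal((30, 8))
cluster_b = 0.55 + 0.05 * rng.standard_normal((30, 8))

def with_labels(distinct):
    """Append a 2-d 'acoustic label' vector as extra input features (not a teaching signal)."""
    la = np.tile([1.0, 0.0], (30, 1))
    lb = np.tile([0.0, 1.0] if distinct else [1.0, 0.0], (30, 1))
    return np.vstack([np.hstack([cluster_a, la]), np.hstack([cluster_b, lb])])

def bmu_sets(data, grid=6, iters=3000):
    """Train a SOM and return, for each half of the data, the set of best-matching units."""
    w = rng.random((grid, grid, data.shape[1]))
    coords = np.stack(np.meshgrid(np.arange(grid), np.arange(grid), indexing="ij"), axis=-1)
    for t in range(iters):
        x = data[rng.integers(len(data))]
        bmu = np.unravel_index(np.argmin(np.linalg.norm(w - x, axis=-1)), (grid, grid))
        sigma = 2.0 * np.exp(-t / iters)
        neigh = np.exp(-np.sum((coords - np.array(bmu)) ** 2, axis=-1) / (2 * sigma ** 2))
        w += 0.4 * np.exp(-t / iters) * neigh[..., None] * (x - w)
    bmus = [np.unravel_index(np.argmin(np.linalg.norm(w - x, axis=-1)), (grid, grid)) for x in data]
    half = len(data) // 2
    return set(bmus[:half]), set(bmus[half:])

# With two distinct labels the clusters tend to claim more separate map regions
# (fewer shared best-matching units) than with one shared label, with no supervised error signal.
for name, flag in [("one shared label", False), ("two distinct labels", True)]:
    a, b = bmu_sets(with_labels(flag))
    print(f"{name}: shared best-matching units = {len(a & b)}")
```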


Journal of Memory and Language | 2014

Infant word recognition: Insights from TRACE simulations.

Julien Mayor; Kim Plunkett

Highlights
• We reconcile conflicting theories about vowels and consonants in early perception.
• Mispronunciation sensitivity is modulated by the size and structure of the lexicon.
• Asymmetries in mispronunciation can be explained with a fully specified phonology.
• Inhibition at a phoneme level and/or at a lexical level is likely reduced in infancy.
• Claim that words from dense neighbourhoods are harder to learn needs a stronger test.
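
The simulations reported in the paper use TRACE (McClelland and Elman, 1986). The toy below is not TRACE; it is a much smaller interactive-activation-style sketch with a hypothetical four-word lexicon, intended only to illustrate the inhibition bullet point: when lexical-level lateral inhibition is weak, as hypothesized for infants, partially matching words (including the intended target of a mispronunciation) remain active rather than being suppressed by their competitors.

```python
import numpy as np

# Hypothetical toy lexicon and a mispronounced input "gog" (for "dog").
lexicon = ["dog", "dot", "log", "fog"]
heard = "gog"

def bottom_up(word, heard):
    """Crude bottom-up support: proportion of matching phonemes (not TRACE's phoneme layer)."""
    return sum(a == b for a, b in zip(word, heard)) / len(word)

def run(lateral_inhibition, steps=60, decay=0.1, rate=0.2):
    """Interactive-activation-style update: excitation from the input, inhibition from other words."""
    act = np.zeros(len(lexicon))
    inp = np.array([bottom_up(w, heard) for w in lexicon])
    for _ in range(steps):
        inhibition = lateral_inhibition * (act.sum() - act)   # competition from all other words
        act = np.clip(act + rate * (inp - inhibition) - decay * act, 0.0, 1.0)
    return dict(zip(lexicon, act.round(3)))

print("strong lexical inhibition:", run(lateral_inhibition=0.6))
print("weak lexical inhibition  :", run(lateral_inhibition=0.1))
```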


Developmental Science | 2014

Shared understanding and idiosyncratic expression in early vocabularies.

Julien Mayor; Kim Plunkett

To what extent do toddlers have shared vocabularies? We examined CDI data collected from 14,607 infants and toddlers in five countries and measured the amount of variability between individual lexicons during development, for both comprehension and production. Early lexicons are highly overlapping. However, beyond 100 words, toddlers share more words with other toddlers in comprehension than in production, even when matched for lexicon size. This finding points to a structural difference between early comprehension and production: toddlers are generalists in comprehension but develop a unique, expressive voice. Variability in production decreases after two years of age, suggesting convergence toward a common expressive core vocabulary. We discuss potential exogenous and endogenous contributions to the inverted U-shaped development observed in young children's expressive lexical variability.
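
A minimal sketch of the kind of overlap measure described, using randomly generated toy lexicons rather than the actual CDI data: lexicons of matched size are sampled from a word list, and the mean proportion of words shared by pairs of children is compared for a more "generalist" and a more idiosyncratic sampling bias (both biases are invented for illustration).

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(0)

# Hypothetical stand-in for a CDI word list and individual checklists (not the real data).
cdi_words = [f"word_{i}" for i in range(680)]

def toy_lexicon(size, bias):
    """Sample a lexicon of a given size; larger 'bias' skews choices toward early, frequent words."""
    p = np.exp(-bias * np.arange(len(cdi_words)))
    p /= p.sum()
    return set(rng.choice(len(cdi_words), size=size, replace=False, p=p))

def mean_pairwise_overlap(lexicons):
    """Average proportion of words shared by two children, matched for lexicon size."""
    return np.mean([len(a & b) / len(a) for a, b in combinations(lexicons, 2)])

size = 100  # compare children matched at a lexicon size of 100 words
comprehension = [toy_lexicon(size, bias=0.02) for _ in range(30)]   # more 'generalist' sampling
production = [toy_lexicon(size, bias=0.005) for _ in range(30)]     # more idiosyncratic sampling
print("comprehension-style overlap:", round(mean_pairwise_overlap(comprehension), 3))
print("production-style overlap   :", round(mean_pairwise_overlap(production), 3))
```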


Frontiers in Psychology | 2010

Are Scientists Nearsighted Gamblers? The Misleading Nature of Impact Factors

Julien Mayor

Despite a "Cambrian" explosion in the number of citation metrics in use (Van Noorden, 2010), the impact factor (IF) of a journal remains a decisive factor when choosing where to publish and when evaluating research productivity. Most other citation metrics correlate with the IF, and there is little doubt that they reflect the overall impact of different journals. However, there is good reason to be more cautious about IF judgments.

First, the distribution of the number of citations per paper (NCPP) within a journal is heavily skewed. A few highly cited papers often account for a large share of a journal's total citation count (25% of the papers in Nature account for 89% of its IF; "Not-so-deep impact," 2005), and a recent report highlighted that even a single article can dramatically bias the IF of a small journal (Dimitrov et al., 2010). The mean NCPP, as captured by the IF, should therefore never be used; a more appropriate measure is the median NCPP. Figure 1A plots the median of the total NCPP against the mean for three potential publication outlets for psychologists: Psychological Review (2009 IF = 9.1), Nature (2009 IF = 34.5), and Psychological Science (2009 IF = 5.1), for different years (data compiled from ISI Web of Knowledge). Nature follows a distinct trend when compared with specialist journals: its median seems independent of its mean. This apparent dissociation results from the skew of the citation distribution observed in Nature, in which up to 35–40% of published articles are never cited. Moreover, when using this robust metric for skewed distributions, Psychological Science's median is about seven times higher than Nature's, even though its IF is about seven times lower. The discrepancy even holds for specialized journals with very low impact factors: despite an IF nearly 35 times lower than Nature's, the Journal of Child Language's median is higher than Nature's (data not shown).

Figure 1. (A) Median number of citations per article as a function of the mean, for different years. (B) Time course of citations for articles published in 1995.

A second noteworthy property of the IF is that it counts citations of recently published articles only. For example, the 2009 IF of a journal considers the number of citations in 2009 to articles published in 2007 and 2008 only. However, the time course of citations differs dramatically from one journal to another. Figure 1B reports the mean NCPP per year as a function of the number of years since publication. Once again, Nature shows a distinct citation profile, with citations peaking two years after publication, whereas specialist journals show a steady increase in the NCPP. An article in Psychological Review will see its influence grow with age, yet it would be outperformed by a Nature article when the IF monitors only short-lived citation patterns. From this perspective, the IF, commonly accepted as the gold standard among performance metrics, seems to reward high-risk strategies (after all, your Nature article has only a slightly better than 50% chance of ever being cited!) and short-lived bursts of attention. Are scientists then nearsighted gamblers?
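
The mean-versus-median argument can be illustrated with synthetic data (a lognormal tail plus a share of never-cited papers as a stand-in for a skewed citation distribution; the parameters below are invented, not fitted to Nature or any real journal): the heavy-tailed "generalist" journal ends up with a far higher mean, and hence IF, yet a lower median than the "specialist" one.

```python
import numpy as np

rng = np.random.default_rng(0)

def journal(n_articles, frac_uncited, log_mean, log_sigma):
    """Synthetic citations per paper: a share of never-cited articles plus a heavy lognormal tail."""
    cites = rng.lognormal(log_mean, log_sigma, n_articles).astype(int)
    cites[rng.random(n_articles) < frac_uncited] = 0
    return cites

# Illustrative parameters only (not fitted to real journals).
generalist = journal(2000, frac_uncited=0.35, log_mean=3.5, log_sigma=1.8)
specialist = journal(300, frac_uncited=0.05, log_mean=2.5, log_sigma=0.6)

for name, c in [("generalist-style", generalist), ("specialist-style", specialist)]:
    top_quarter = np.sort(c)[-len(c) // 4:].sum() / max(c.sum(), 1)
    print(f"{name:17s} mean={c.mean():7.1f}  median={np.median(c):5.1f}  "
          f"top 25% of papers hold {top_quarter:.0%} of citations")
```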


Journal of Physiology-Paris | 2004

Transient information flow in a network of excitatory and inhibitory model neurons: role of noise and signal autocorrelation.

Julien Mayor; Wulfram Gerstner

We investigate the performance of sparsely connected networks of integrate-and-fire neurons for ultra-short-term information processing. We exploit the fact that the population activity of networks with balanced excitation and inhibition can switch from an oscillatory firing regime to a state of asynchronous irregular firing or quiescence, depending on the rate of external background spikes. We find that, in terms of information buffering, the network performs best for a moderate, non-zero amount of noise. Analogous to the phenomenon of stochastic resonance, performance decreases for higher and lower noise levels. The optimal amount of noise corresponds to the transition zone between a quiescent state and a regime of stochastic dynamics. This provides a potential explanation of the role of non-oscillatory population activity in a simplified model of cortical micro-circuits.
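
For illustration, here is a NumPy sketch of the ingredients described in the abstract, not the authors' simulation: a sparse network of excitatory and inhibitory leaky integrate-and-fire neurons driven by external background Poisson spikes, whose rate is the knob that moves the network between quiescence and irregular firing. All parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters (not the paper's): sparse excitatory/inhibitory LIF network
# driven by external background spikes.
n_exc, n_inh = 800, 200
n = n_exc + n_inh
p_conn = 0.1                              # connection probability (sparse)
j_exc, j_inh = 0.1, -0.5                  # synaptic efficacies (mV); inhibition balances excitation
tau_m, theta, v_reset = 20.0, 20.0, 0.0   # membrane time constant (ms), threshold and reset (mV)
dt, steps = 0.1, 5000                     # 0.1 ms resolution, 500 ms of simulated time
nu_ext = 12.0                             # external background spike rate per neuron (spikes/ms)

w = (rng.random((n, n)) < p_conn).astype(float)
w[:, :n_exc] *= j_exc                     # columns are presynaptic: excitatory then inhibitory
w[:, n_exc:] *= j_inh
np.fill_diagonal(w, 0.0)

v = rng.random(n) * theta                 # random initial membrane potentials
spikes = np.zeros(n, dtype=bool)
rate = []

for _ in range(steps):
    background = j_exc * rng.poisson(nu_ext * dt, n)       # external Poisson drive
    v += dt * (-v / tau_m) + w @ spikes + background       # leaky integration + recurrent input
    spikes = v >= theta
    v[spikes] = v_reset
    rate.append(spikes.mean() / (dt / 1000.0))             # instantaneous population rate (Hz)

print(f"mean population rate: {np.mean(rate):.1f} Hz")
```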


international conference on artificial neural networks | 2003

Online processing of multiple inputs in a sparsely-connected recurrent neural network

Julien Mayor; Wulfram Gerstner

The storage and short-term memory capacities of recurrent neural networks of spiking neurons are investigated. We demonstrate that it is possible to process many superimposed input streams online, despite the fact that the stored information is spread throughout the network. We show that simple output structures are powerful enough to extract this diffuse information from the network. The dimensional blow-up, which is crucial in kernel methods, is achieved efficiently by the dynamics of the network itself.
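
A rough sketch of the general idea under simplifying assumptions: a fixed, sparsely connected recurrent network (here a rate-based, echo-state-style network rather than the spiking network of the paper) acts as a high-dimensional temporal buffer, and simple linear readouts recover each of several superimposed input streams a couple of steps back in time. Sizes and scalings are invented for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)

n_res, n_in, t_train, t_test, delay = 300, 3, 2000, 500, 2

# Three independent input streams are fed simultaneously into the same network
# through random projections, so their traces are superimposed in the network state.
streams = rng.standard_normal((t_train + t_test, n_in))

# Fixed, sparsely connected recurrent weights (a rate-based stand-in for the spiking
# network of the paper), scaled to a spectral radius below 1.
w_res = rng.standard_normal((n_res, n_res)) * (rng.random((n_res, n_res)) < 0.05)
w_res *= 0.9 / np.max(np.abs(np.linalg.eigvals(w_res)))
w_in = 0.3 * rng.standard_normal((n_res, n_in))

states = np.zeros((t_train + t_test, n_res))
x = np.zeros(n_res)
for t in range(t_train + t_test):
    x = np.tanh(w_res @ x + w_in @ streams[t])
    states[t] = x

# Simple linear readouts (ridge regression) recover each stream a couple of steps back,
# even though the information is spread diffusely across the whole network.
feats, targets = states[delay:], streams[:-delay]
w_out = np.linalg.solve(feats[:t_train].T @ feats[:t_train] + 1e-2 * np.eye(n_res),
                        feats[:t_train].T @ targets[:t_train])
pred, true = feats[t_train:] @ w_out, targets[t_train:]
for k in range(n_in):
    print(f"stream {k}: test correlation = {np.corrcoef(pred[:, k], true[:, k])[0, 1]:.2f}")
```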


Child Development | 2016

Children's Faithfulness in Imitating Language Use Varies Cross-Culturally, Contingent on Prior Experience

Jörn Klinger; Julien Mayor; Colin Bannard

Despite its recognized importance for cultural transmission, little is known about the role imitation plays in language learning. Three experiments examine how rates of imitation vary as a function of qualitative differences in the way language is used in a small indigenous community in Oaxaca, Mexico, and in three Western comparison groups. Data from one hundred thirty-eight 3- to 10-year-olds suggest that children selectively imitate when they understand the function of a given linguistic element because their culture makes frequent use of that function. When the function is opaque, however, children imitate faithfully. This has implications for how children manage the imitation-innovation trade-off and offers insight into why children imitate in language learning across development.


Neuroreport | 2005

Noise-enhanced computation in a model of a cortical column

Julien Mayor; Wulfram Gerstner

Various sensory systems use noise to enhance the detection of weak signals. It has been conjectured in the literature that this effect, known as stochastic resonance, may also operate in central cognitive processes such as the memory retrieval of arithmetical multiplication. We show, in a simplified model of cortical tissue, that complex arithmetical calculations can be carried out and are enhanced in the presence of a stochastic background. Performance is shown to be positively correlated with the susceptibility of the network, defined as its sensitivity to a variation of the mean of its inputs. For non-trivial arithmetic tasks such as multiplication, stochastic resonance is a collective property of the micro-circuitry of the model network.
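
This is not the cortical-column model of the paper, but a textbook-style illustration of stochastic resonance with invented parameters: a weak, subthreshold periodic signal is passed through a hard threshold, and the correlation between the all-or-none output and the signal peaks at an intermediate noise level.

```python
import numpy as np

rng = np.random.default_rng(0)

t = np.arange(0, 200, 0.01)
signal = 0.4 * np.sin(2 * np.pi * 0.05 * t)   # weak periodic signal, kept below the threshold (toy values)
threshold = 1.0

def detection_quality(noise_sd, trials=20):
    """Correlation between the signal and the threshold-crossing output, averaged over trials."""
    scores = []
    for _ in range(trials):
        noisy = signal + noise_sd * rng.standard_normal(len(t))
        output = (noisy > threshold).astype(float)          # all-or-none 'spiking' response
        if output.std() > 0:
            scores.append(np.corrcoef(output, signal)[0, 1])
        else:
            scores.append(0.0)                              # no crossings at all: nothing detected
    return np.mean(scores)

for sd in [0.05, 0.2, 0.5, 1.0, 2.0, 4.0]:
    print(f"noise sd {sd:4.2f}: detection quality {detection_quality(sd):.3f}")
```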


Frontiers in Psychology | 2014

Connectionism coming of age: legacy and future challenges.

Julien Mayor; Pablo Gomez; Franklin Chang; Gary Lupyan

In 1986, Rumelhart and McClelland took the cognitive science community by storm with the Parallel Distributed Processing (PDP) framework. Rather than abstracting away from the biological substrate, as the "information processing" paradigms of the 1970s had sought to do, connectionism, as it has come to be called, embraced it. An immediate appeal of the connectionist agenda was its aim: to construct, at the algorithmic level, models of cognition that were compatible with their implementation in the biological substrate. The PDP group argued that this could be achieved by turning to networks of artificial neurons, originally introduced by McCulloch and Pitts (1943), which the group showed were able to provide insights into a wide range of psychological domains, from categorization to perception to memory to language. This work built on an earlier formulation by Rosenblatt (1958), who introduced a simple type of feed-forward neural network called the perceptron. Perceptrons were limited to solving simple linearly separable problems, and although networks composed of perceptrons were known to be able to compute any Boolean function, including XOR (Minsky and Papert, 1969), there was no effective way of training such networks. In 1986, Rumelhart, Hinton, and Williams introduced the back-propagation algorithm, providing an effective way of training multi-layered neural networks, which could readily learn non-linearly-separable functions.

In addition to providing the field with an effective learning algorithm, the PDP group published a series of demonstrations of how long-standing questions in cognitive psychology could be elegantly solved using simple learning rules, distributed representations, and interactive processing. To take a classic example, consider the word-superiority effect, in which people can detect letters within a word faster than individual letters or letters within a non-word (Reicher, 1969). This result is difficult to square with the serial "information-processing" theories of cognition that were dominant at the time (how could someone recognize "R" before "FRIEND" if recognizing the word required recognizing the letters?). Accounting for such findings demanded a framework that could naturally accommodate interactive processes within a bidirectional flow of information. The so-called "interactive-activation model" (McClelland and Rumelhart, 1981) provided just such a framework.

The connectionist paradigm was not without its critics. The principal critiques can be divided into several classes. First, some neuroscientists (Crick, 1989) questioned the biological plausibility of backpropagation when they failed to observe experimentally the complex and differentiated back-propagating signals required for learning in multi-layered neural networks. A second critique concerned the stability and plasticity of the learned representations in these models: some phenomena require the ability to rapidly learn new information, but newly learned knowledge can overwrite previously learned information (catastrophic interference; McCloskey and Cohen, 1989). Third, representing spatial and temporal invariance, something that apparently comes easily to people, was difficult for models, e.g., recognizing that the letter "T" in "TOM" is the "same" as the "T" in "POT." This invariance problem was typically solved by duplicating a large number of hard-wired units that were space- or time-locked (see e.g., McClelland and Elman, 1986). Finally, critics pointed out that the networks were incapable of learning true rules, on which a number of human behaviors, most notably language learning, were thought to depend (e.g., Marcus, 2003; cf. Fodor and Pylyshyn, 1988; Seidenberg, 1999).

The connectionist approach has embraced these challenges. Although some connectionist models continue to rely on backpropagation, others have moved to more biologically realistic learning rules (Giese and Poggio, 2003; Masquelier and Thorpe, 2007). Far from being a critical flaw of connectionism, the phenomenon of catastrophic interference (Mermillod et al., 2013) proved to be a feature that led to the development of complementary learning systems (McClelland et al., 1995). Progress has also been made on the invariance problem. For example, within the speech domain, representing the similarity between speech sounds regardless of their location within a word has been addressed by Grossberg and Myers (2000) and Norris (1994), and this issue presents a new, more streamlined and computationally efficient model (Hannagan et al., 2013). An especially powerful approach to solving the location-invariance problem in the visual domain is presented by Di Bono and Zorzi (2013), also in this issue.

A key challenge for connectionism is to explain the learning of abstract structural representations. The use of recurrent networks (Elman, 1990; Dominey, 2013) and self-organizing maps has captured important aspects of language learning (e.g., Mayor and Plunkett, 2010; Li and Zhao, 2013), while work on deep learning (Hinton and Salakhutdinov, 2006) has made it possible to model the emergence of structured and abstract representations within multi-layered hierarchical networks (Zorzi et al., 2013). The work on verbal analogies by Kollias and McClelland (2013) continues to address the challenge of modeling more abstract representations, but truly understanding how neural architectures give rise to symbolic cognition remains a gap. Although learning and representing formal language rules may not be completely outside the abilities of neural networks (e.g., Chang, 2009), it seems clear that understanding human cognition requires understanding how we solve these symbolic problems (Clark and Karmiloff-Smith, 1993; Lupyan, 2013). Future generations of connectionist modelers may wish to fill this gap and, in so doing, provide a fuller picture of how neural networks give rise to intelligence of the sort that enables us to ponder the very workings of our own cognition.
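
As a minimal illustration of the point about Rumelhart, Hinton, and Williams (1986), and not a reproduction of any published model, the sketch below trains a small multi-layer network with backpropagation on XOR, the non-linearly-separable problem a single perceptron cannot solve.

```python
import numpy as np

rng = np.random.default_rng(0)

# XOR: the classic non-linearly-separable problem a single-layer perceptron cannot solve.
x = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# A small hidden layer is enough once the weights can be trained by backpropagation.
w1, b1 = rng.standard_normal((2, 4)), np.zeros(4)
w2, b2 = rng.standard_normal((4, 1)), np.zeros(1)
lr = 1.0

for _ in range(10000):
    h = sigmoid(x @ w1 + b1)             # forward pass: hidden layer
    out = sigmoid(h @ w2 + b2)           # forward pass: output
    d_out = (out - y) * out * (1 - out)  # backward pass: squared-error gradient at the output
    d_h = (d_out @ w2.T) * h * (1 - h)   # gradient propagated back through the hidden layer
    w2 -= lr * h.T @ d_out / len(x)
    b2 -= lr * d_out.mean(axis=0)
    w1 -= lr * x.T @ d_h / len(x)
    b1 -= lr * d_h.mean(axis=0)

print(out.round(2).ravel())              # typically converges to approximately [0, 1, 1, 0]
```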

Collaboration


Dive into Julien Mayor's collaborations.

Top Co-Authors

Wulfram Gerstner

École Polytechnique Fédérale de Lausanne

Nicolas Ruh

Oxford Brookes University

Gary Lupyan

University of Wisconsin-Madison

Jörn Klinger

University of Texas at Austin
