Publications


Featured research published by Michael N. Jones.


Psychological Review | 2007

Representing word meaning and order information in a composite holographic lexicon.

Michael N. Jones; D. J. K. Mewhort

The authors present a computational model that builds a holographic lexicon representing both word meaning and word order from unsupervised experience with natural language. The model uses simple convolution and superposition mechanisms (cf. B. B. Murdock, 1982) to learn distributed holographic representations for words. The structure of the resulting lexicon can account for empirical data from classic experiments studying semantic typicality, categorization, priming, and semantic constraint in sentence completions. Furthermore, order information can be retrieved from the holographic representations, allowing the model to account for limited word transitions without the need for built-in transition rules. The model demonstrates that a broad range of psychological data can be accounted for directly from the structure of lexical representations learned in this way, without the need for complexity to be built into either the processing mechanisms or the representations. The holographic representations are an appropriate knowledge representation to be used by higher order models of language comprehension, relieving the complexity required at the higher level.
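
The encoding scheme can be sketched in a few lines of NumPy. This is a simplified illustration under assumptions, not the published implementation: BEAGLE binds n-grams of several sizes through a directional placeholder vector, whereas this sketch superimposes co-occurring words for context and binds only adjacent bigrams for order.

```python
import numpy as np

def cconv(a, b):
    # Circular convolution via FFT: binds two vectors into a third of the same
    # dimensionality, the core operation of holographic reduced representations.
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

dim = 1024
rng = np.random.default_rng(0)

# Each word gets a fixed random "environmental" vector; its lexical
# representation starts empty and accumulates experience over sentences.
sentence = ["a", "dog", "bit", "the", "mailman"]
env = {w: rng.normal(0.0, 1.0 / np.sqrt(dim), dim) for w in sentence}
lexicon = {w: np.zeros(dim) for w in sentence}

for i, w in enumerate(sentence):
    for j, c in enumerate(sentence):
        if j != i:
            lexicon[w] += env[c]          # context: superposition of co-occurring words
    if i + 1 < len(sentence):
        lexicon[w] += cconv(env[w], env[sentence[i + 1]])  # order: bind right neighbor
```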


Psychological Review | 2012

Optimal foraging in semantic memory

Thomas T. Hills; Michael N. Jones; Peter M. Todd

Do humans search in memory using dynamic local-to-global search strategies similar to those that animals use to forage between patches in space? If so, do their dynamic memory search policies correspond to optimal foraging strategies seen for spatial foraging? Results from a number of fields suggest these possibilities, including the shared structure of the search problems (searching in patchy environments) and recent evidence supporting a domain-general cognitive search process. To investigate these questions directly, we asked participants to retrieve from memory as many animal names as they could in 3 min. Memory search was modeled over a representation of the semantic search space generated from the BEAGLE memory model of Jones and Mewhort (2007), via a search process similar to models of associative memory search (e.g., Raaijmakers & Shiffrin, 1981). We found evidence for local structure (i.e., patches) in memory search and patch depletion preceding dynamic local-to-global transitions between patches. Dynamic models also significantly outperformed nondynamic models. The timing of dynamic local-to-global transitions was consistent with optimal search policies in space, specifically the marginal value theorem (Charnov, 1976), and participants who were more consistent with this policy recalled more items.
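
The marginal value theorem itself is easy to state numerically: leave a patch when the instantaneous rate of return falls to the long-run average rate across patches. A minimal sketch with an invented gain curve (the functional form and parameter values are illustrative, not from the paper):

```python
import numpy as np

G, tau, travel = 10.0, 2.0, 1.5          # asymptotic gain, depletion rate, travel time (invented)
gain = lambda t: G * (1 - np.exp(-t / tau))        # cumulative gain after t time in a patch
marginal = lambda t: (G / tau) * np.exp(-t / tau)  # instantaneous gain rate

# Charnov (1976): leave the patch when the instantaneous rate falls to the
# long-run average rate over patches, i.e. marginal(t) = gain(t) / (t + travel).
ts = np.linspace(0.01, 20.0, 100000)
t_star = ts[np.argmin(np.abs(marginal(ts) - gain(ts) / (ts + travel)))]
print(f"optimal residence time per patch: {t_star:.2f}")
```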


Cognition | 2009

Activating event knowledge

Mary Hare; Michael N. Jones; Caroline Thomson; Sarah Kelly; Ken McRae

An increasing number of results in sentence and discourse processing demonstrate that comprehension relies on rich pragmatic knowledge about real-world events, and that incoming words incrementally activate such knowledge. If so, then even outside of any larger context, nouns should activate knowledge of the generalized events that they denote or typically play a role in. We used short stimulus onset asynchrony priming to demonstrate that (1) event nouns prime people (sale-shopper) and objects (trip-luggage) commonly found at those events; (2) location nouns prime people/animals (hospital-doctor) and objects (barn-hay) commonly found at those locations; and (3) instrument nouns prime things on which those instruments are commonly used (key-door), but not the types of people who tend to use them (hose-gardener). The priming effects are not due to normative word association. On our account, facilitation results from event knowledge relating primes and targets. This has much in common with computational models like LSA or BEAGLE in which one word primes another if they frequently occur in similar contexts. LSA predicts priming for all six experiments, whereas BEAGLE correctly predicted that priming should not occur for the instrument-people relation but should occur for the other five. We conclude that event-based relations are encoded in semantic memory and computed as part of word meaning, and have a strong influence on language comprehension.


Behavior Research Methods | 2009

More data trumps smarter algorithms: Comparing pointwise mutual information with latent semantic analysis

Gabriel Recchia; Michael N. Jones

Computational models of lexical semantics, such as latent semantic analysis, can automatically generate semantic similarity measures between words from statistical redundancies in text. These measures are useful for experimental stimulus selection and for evaluating a model’s cognitive plausibility as a mechanism that people might use to organize meaning in memory. Although humans are exposed to enormous quantities of speech, practical constraints limit the amount of data that many current computational models can learn from. We follow up on previous work evaluating a simple metric of pointwise mutual information. Controlling for confounds in previous work, we demonstrate that this metric benefits from training on extremely large amounts of data and correlates more closely with human semantic similarity ratings than do publicly available implementations of several more complex models. We also present a simple tool for building scalable models from large corpora quickly and efficiently.
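
The metric in question is standard pointwise mutual information, pmi(x, y) = log2[p(x, y) / (p(x) p(y))], estimated from windowed co-occurrence counts. A minimal sketch, with the window size, normalization, and toy corpus all chosen for illustration:

```python
import math
from collections import Counter

def build_pmi(sentences, window=5):
    # Count unigrams and windowed co-occurrences, then score pairs with
    # pmi(x, y) = log2(p(x, y) / (p(x) * p(y))).
    words, pairs, n_words, n_pairs = Counter(), Counter(), 0, 0
    for tokens in sentences:
        for i, w in enumerate(tokens):
            words[w] += 1
            n_words += 1
            for c in tokens[i + 1 : i + 1 + window]:
                pairs[frozenset((w, c))] += 1
                n_pairs += 1

    def pmi(x, y):
        joint = pairs[frozenset((x, y))] / n_pairs
        if joint == 0.0:
            return float("-inf")
        return math.log2(joint / ((words[x] / n_words) * (words[y] / n_words)))

    return pmi

pmi = build_pmi([["the", "cat", "sat", "on", "the", "mat"],
                 ["a", "cat", "sat", "on", "a", "rug"]])
print(pmi("cat", "sat"))
```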


Topics in Cognitive Science | 2011

Redundancy in Perceptual and Linguistic Experience: Comparing Feature-Based and Distributional Models of Semantic Representation

Brian Riordan; Michael N. Jones

Since their inception, distributional models of semantics have been criticized as inadequate cognitive theories of human semantic learning and representation. A principal challenge is that the representations derived by distributional models are purely symbolic and are not grounded in perception and action; this challenge has led many to favor feature-based models of semantic representation. We argue that the amount of perceptual and other semantic information that can be learned from purely distributional statistics has been underappreciated. We compare the representations of three feature-based and nine distributional models using a semantic clustering task. Several distributional models demonstrated semantic clustering comparable with clustering based on feature-based representations. Furthermore, when trained on child-directed speech, the same distributional models performed as well as sensorimotor-based feature representations of children's lexical semantic knowledge. These results suggest that, to a large extent, information relevant for extracting semantic categories is redundantly coded in perceptual and linguistic experience. Detailed analyses of the semantic clusters of the feature-based and distributional models also reveal that the models make use of complementary cues to semantic organization from the two data streams. Rather than conceptualizing feature-based and distributional models as competing theories, we argue that future focus should be on understanding the cognitive mechanisms humans use to integrate the two sources.
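
A generic version of such a semantic clustering evaluation can be sketched as follows; the clustering algorithm (k-means here) and the purity score are stand-ins, and the paper's actual procedure may differ:

```python
import numpy as np
from sklearn.cluster import KMeans

def cluster_purity(vectors, gold_labels, k):
    # Cluster the word vectors, then score each cluster by its majority gold
    # category; purity is the fraction of words landing with their majority.
    assignments = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(vectors)
    correct = 0
    for c in range(k):
        members = [g for g, a in zip(gold_labels, assignments) if a == c]
        if members:
            correct += max(members.count(g) for g in set(members))
    return correct / len(gold_labels)

# Hypothetical usage: rows of X would be vectors for the same words from a
# feature-based or a distributional model; here they are random placeholders.
rng = np.random.default_rng(0)
X = rng.normal(size=(40, 50))
gold = ["animal"] * 20 + ["tool"] * 20
print(cluster_purity(X, gold, k=2))
```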


Canadian Journal of Experimental Psychology | 2012

The Role of Semantic Diversity in Lexical Organization

Michael N. Jones; Brendan T. Johns; Gabriel Recchia

Recent research has challenged the notion that word frequency is the organizing principle underlying lexical access, pointing instead to the number of contexts that a word occurs in (Adelman, Brown, & Quesada, 2006). Counting contexts gives a better quantitative fit to human lexical decision and naming data than counting raw occurrences of words. However, this approach ignores the information redundancy of the contexts in which the word occurs, a factor we refer to as semantic diversity. Using both a corpus-based study and a controlled artificial language experiment, we demonstrate the importance of contextual redundancy in lexical access, suggesting that contextual repetitions in language only increase a word's memory strength if the repetitions are accompanied by a modulation in semantic context. We introduce a cognitive process mechanism to explain the pattern of behaviour by encoding the word's context relative to the information redundancy between the current context and the word's current memory representation. The model gives a better account of identification latency data than models based on either raw frequency or document count, and also produces a better-organized space to simulate semantic similarity.
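
The contrast among the three predictors can be sketched as follows. The `embed` function mapping a word to a vector is hypothetical, and the novelty-weighted update is only a schematic stand-in for the paper's encoding mechanism:

```python
import numpy as np

def cosine(a, b):
    na, nb = np.linalg.norm(a), np.linalg.norm(b)
    return float(a @ b / (na * nb)) if na and nb else 0.0

def lexical_predictors(docs, target, embed, dim=64):
    freq = sum(doc.count(target) for doc in docs)   # raw occurrence count
    cd = sum(target in doc for doc in docs)         # distinct-document count
    # Semantic-diversity-style strength: a repetition adds to memory only to
    # the extent that its context is novel relative to what is already stored.
    memory, sd = np.zeros(dim), 0.0
    for doc in docs:
        if target not in doc:
            continue
        others = [embed(w) for w in doc if w != target]
        if not others:
            continue
        ctx = np.mean(others, axis=0)
        novelty = 1.0 - max(0.0, cosine(memory, ctx))
        sd += novelty
        memory += novelty * ctx
    return freq, cd, sd
```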


Frontiers in Human Neuroscience | 2012

The semantic richness of abstract concepts.

Gabriel Recchia; Michael N. Jones

We contrasted the predictive power of three measures of semantic richness—number of features (NFs), contextual dispersion (CD), and a novel measure of number of semantic neighbors (NSN)—for a large set of concrete and abstract concepts on lexical decision and naming tasks. NSN (but not NF) facilitated processing for abstract concepts, while NF (but not NSN) facilitated processing for the most concrete concepts, consistent with claims that linguistic information is more relevant for abstract concepts in early processing. Additionally, converging evidence from two datasets suggests that when NSN and CD are controlled for, the features that most facilitate processing are those associated with a concept's physical characteristics and real-world contexts. These results suggest that rich linguistic contexts (many semantic neighbors) facilitate early activation of abstract concepts, whereas concrete concepts benefit more from rich physical contexts (many associated objects and locations).
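
A common way to operationalize NSN is to count the words whose similarity to the target exceeds a threshold in a semantic space; a minimal sketch, with the threshold and the random placeholder space both assumptions:

```python
import numpy as np

def num_semantic_neighbors(target_vec, others, threshold=0.4):
    # Count the words whose cosine similarity to the target clears a threshold;
    # `others` holds vectors for every word in the space except the target.
    norms = np.linalg.norm(others, axis=1) * np.linalg.norm(target_vec)
    sims = others @ target_vec / np.where(norms == 0.0, 1.0, norms)
    return int(np.sum(sims >= threshold))

rng = np.random.default_rng(0)
space = rng.normal(size=(1000, 300))   # random placeholder semantic space
print(num_semantic_neighbors(space[0], space[1:], threshold=0.1))
```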


Psychological Science | 2015

The Words Children Hear: Picture Books and the Statistics for Language Learning

Jessica L. Montag; Michael N. Jones; Linda B. Smith

Young children learn language from the speech they hear. Previous work suggests that greater statistical diversity of words and of linguistic contexts is associated with better language outcomes. One potential source of lexical diversity is the text of picture books that caregivers read aloud to children. Many parents begin reading to their children shortly after birth, so this is potentially an important source of linguistic input for many children. We constructed a corpus of 100 children’s picture books and compared word type and token counts in that sample and a matched sample of child-directed speech. Overall, the picture books contained more unique word types than the child-directed speech. Further, individual picture books generally contained more unique word types than length-matched, child-directed conversations. The text of picture books may be an important source of vocabulary for young children, and these findings suggest a mechanism that underlies the language benefits associated with reading to children.
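
The type and token counts at issue are simple to compute; a toy sketch with invented two-line samples (the study itself used a 100-book corpus and a matched corpus of child-directed speech):

```python
def type_token_counts(tokens):
    # Tokens are running words; types are unique word forms.
    return len(tokens), len(set(tokens))

book = "the very hungry caterpillar nibbled one ripe plum".split()
speech = "do you want the plum you like the plum".split()
for name, sample in [("picture book", book), ("child-directed speech", speech)]:
    n_tok, n_typ = type_token_counts(sample)
    print(f"{name}: {n_tok} tokens, {n_typ} types")
```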


Topics in Cognitive Science | 2012

Perceptual inference through global lexical similarity.

Brendan T. Johns; Michael N. Jones

The literature contains a disconnect between accounts of how humans learn lexical semantic representations for words. Theories generally propose that lexical semantics are learned either through perceptual experience or through exposure to regularities in language. We propose here a model to integrate these two information sources. Specifically, the model uses the global structure of memory to exploit the redundancy between language and perception in order to generate inferred perceptual representations for words with which the model has no perceptual experience. We test the model on a variety of different datasets from grounded cognition experiments and demonstrate that this diverse set of results can be explained as perceptual simulation (cf. Barsalou, Simmons, Barbey, & Wilson, 2003) within a global memory model.
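
One simple form such an inference mechanism can take is a lexical-similarity-weighted average of grounded words' perceptual vectors; a sketch under assumptions, with the similarity exponent `lam` chosen for illustration:

```python
import numpy as np

def cosine(a, b):
    na, nb = np.linalg.norm(a), np.linalg.norm(b)
    return float(a @ b / (na * nb)) if na and nb else 0.0

def infer_perceptual(ling_vec, grounded_ling, grounded_percept, lam=3):
    # Weight every grounded word by its (sharpened) linguistic similarity to
    # the ungrounded word, then average the grounded perceptual vectors.
    weights = np.array([max(0.0, cosine(ling_vec, g)) ** lam for g in grounded_ling])
    return weights @ np.asarray(grounded_percept) / (weights.sum() or 1.0)
```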


Psychonomic Bulletin & Review | 2010

Evaluating the random representation assumption of lexical semantics in cognitive models

Brendan T. Johns; Michael N. Jones

A common assumption implicit in cognitive models is that lexical semantics can be approximated by using randomly generated representations to stand in for word meaning. However, the use of random representations contains the hidden assumption that semantic similarity is symmetrically distributed across randomly selected words or between instances within a semantic category. We evaluated this assumption by computing similarity distributions for randomly selected words from a number of well-known semantic measures and comparing them with the distributions from random representations commonly used in cognitive models. The similarity distributions from all semantic measures were positively skewed compared with the symmetric normal distributions assumed by random representations. We discuss potential consequences that this false assumption may have for conclusions drawn from process models that use random representations.
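
The random-representation side of the comparison is easy to reproduce: pairwise cosine similarities among random Gaussian vectors are symmetric around zero, so their skew is near zero. A minimal sketch (dimensionality and sample size are arbitrary):

```python
import numpy as np
from scipy.stats import skew

rng = np.random.default_rng(0)
vecs = rng.normal(size=(500, 300))
vecs /= np.linalg.norm(vecs, axis=1, keepdims=True)   # unit-length word vectors
sims = (vecs @ vecs.T)[np.triu_indices(500, k=1)]     # all pairwise cosines
print(f"mean = {sims.mean():.4f}, skew = {skew(sims):.4f}")  # both near zero
```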

Collaboration


Dive into Michael N. Jones's collaborations.

Top Co-Authors

Peter M. Todd

Indiana University Bloomington


Tal Yarkoni

University of Texas at Austin


Thomas M. Gruenenfelder

Indiana University Bloomington
