
Publication


Featured research published by George Kachergis.


Psychonomic Bulletin & Review | 2012

An associative model of adaptive inference for learning word-referent mappings

George Kachergis; Chen Yu; Richard M. Shiffrin

People can learn word–referent pairs over a short series of individually ambiguous situations containing multiple words and referents (Yu & Smith, 2007, Cognition 106: 1558–1568). Cross-situational statistical learning relies on the repeated co-occurrence of words with their intended referents, but simple co-occurrence counts cannot explain the findings. Mutual exclusivity (ME: an assumption of one-to-one mappings) can reduce ambiguity by leveraging prior experience to restrict the number of word–referent pairings considered but can also block learning of non-one-to-one mappings. The present study first trained learners on one-to-one mappings with varying numbers of repetitions. In late training, a new set of word–referent pairs were introduced alongside pretrained pairs; each pretrained pair consistently appeared with a new pair. Results indicate that (1) learners quickly infer new pairs in late training on the basis of their knowledge of pretrained pairs, exhibiting ME; and (2) learners also adaptively relax the ME bias and learn two-to-two mappings involving both pretrained and new words and objects. We present an associative model that accounts for both results using competing familiarity and uncertainty biases.
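The core idea of cross-situational learning described above can be sketched as follows. This is a minimal illustration of plain co-occurrence counting, not the authors' actual model (which adds competing familiarity and uncertainty biases); the trial structure and word/object names are invented for the example.

```python
# Minimal sketch of cross-situational co-occurrence learning.
# Each trial presents several words and several objects with no
# cue to which word goes with which object; repeated co-occurrence
# across trials nonetheless singles out the intended pairs.
from collections import defaultdict

def train(trials):
    """Accumulate word-object co-occurrence counts across trials.
    Each trial is (words, objects), both lists."""
    counts = defaultdict(float)
    for words, objects in trials:
        for w in words:
            for o in objects:
                counts[(w, o)] += 1.0
    return counts

def choose(counts, word, objects):
    """Pick the referent with the highest co-occurrence count."""
    return max(objects, key=lambda o: counts[(word, o)])

# Three individually ambiguous trials; only the pairings
# w1-o1, w2-o2, w3-o3 co-occur consistently.
trials = [(["w1", "w2"], ["o1", "o2"]),
          (["w2", "w3"], ["o2", "o3"]),
          (["w1", "w3"], ["o1", "o3"])]
counts = train(trials)
print(choose(counts, "w1", ["o1", "o2", "o3"]))  # "o1"
```

Simple counts like these cannot by themselves produce the mutual-exclusivity and adaptive two-to-two mapping effects reported in the paper; that is exactly the gap the authors' biased associative model addresses.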


Behavior Research Methods | 2011

Toward a scalable holographic word-form representation

Gregory E. Cox; George Kachergis; Gabriel Recchia; Michael N. Jones

Phenomena in a variety of verbal tasks—for example, masked priming, lexical decision, and word naming—are typically explained in terms of similarity between word-forms. Despite the apparent commonalities between these sets of phenomena, the representations and similarity measures used to account for them are not often related. To show how this gap might be bridged, we build on the work of Hannagan, Dupoux, and Christophe (2011, Cognitive Science 35:79–118) to explore several methods of representing visual word-forms using holographic reduced representations and to evaluate them on their ability to account for a wide range of effects in masked form priming, as well as data from lexical decision and word naming. A representation that assumes that word-internal letter groups are encoded relative to word-terminal letter groups is found to predict qualitative patterns in masked priming, as well as lexical decision and naming latencies. We then show how this representation can be integrated with the BEAGLE model of lexical semantics (Jones & Mewhort, Psychological Review 114:1–37, 2007) to enable the model to encompass a wider range of verbal tasks.
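The binding operation behind holographic reduced representations can be illustrated in a few lines. This is a toy version, not the specific letter-group encoding evaluated in the paper: it binds random letter vectors into adjacent bigrams with circular convolution (the HRR binding operator) and sums them into one word-form vector, so that form-similar strings end up with similar vectors. The bigram scheme, dimensionality, and example words are assumptions for illustration.

```python
# Toy holographic word-form encoding via circular convolution.
import numpy as np

rng = np.random.default_rng(0)
D = 1024  # dimensionality of the holographic vectors

def rand_vec():
    return rng.normal(0.0, 1.0 / np.sqrt(D), D)

def cconv(a, b):
    """Circular convolution via FFT: the HRR binding operator."""
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

letters = {c: rand_vec() for c in "abcdefghijklmnopqrstuvwxyz"}

def word_form(word):
    """Encode a word as the sum of its bound adjacent bigrams."""
    v = np.zeros(D)
    for x, y in zip(word, word[1:]):
        v += cconv(letters[x], letters[y])
    return v

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# A transposed-letter string ("jugde") remains more similar to
# "judge" than an unrelated word of the same length does.
print(cosine(word_form("judge"), word_form("jugde")))
print(cosine(word_form("judge"), word_form("piano")))
```

Because distinct random bindings are nearly orthogonal in high dimensions, similarity between two word-form vectors roughly tracks how many encoded letter groups the strings share, which is what makes such representations candidates for explaining form-priming effects.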


Philosophical Transactions of the Royal Society B | 2014

A continuous-time neural model for sequential action

George Kachergis; Dean Wyatte; Randall C. O'Reilly; Roy de Kleijn; Bernhard Hommel

Action selection, planning and execution are continuous processes that evolve over time, responding to perceptual feedback as well as evolving top-down constraints. Existing models of routine sequential action (e.g. coffee- or pancake-making) generally fall into one of two classes: hierarchical models that include hand-built task representations, or heterarchical models that must learn to represent hierarchy via temporal context, but thus far lack goal-orientedness. We present a biologically motivated model of the latter class that, because it is situated in the Leabra neural architecture, affords an opportunity to include both unsupervised and goal-directed learning mechanisms. Moreover, we embed this neurocomputational model in the theoretical framework of the theory of event coding (TEC), which posits that actions and perceptions share a common representation with bidirectional associations between the two. Thus, in this view, not only does perception select actions (along with task context), but actions are also used to generate perceptions (i.e. intended effects). We propose a neural model that implements TEC to carry out sequential action control in hierarchically structured tasks such as coffee-making. Unlike traditional feedforward discrete-time neural network models, which use static percepts to generate static outputs, our biological model accepts continuous-time inputs and likewise generates non-stationary outputs, making short-timescale dynamic predictions.


Topics in Cognitive Science | 2013

Actively Learning Object Names Across Ambiguous Situations

George Kachergis; Chen Yu; Richard M. Shiffrin

Previous research shows that people can use the co-occurrence of words and objects in ambiguous situations (i.e., containing multiple words and objects) to learn word meanings during a brief passive training period (Yu & Smith, 2007). However, learners in the world are not completely passive but can affect how their environment is structured by moving their heads, eyes, and even objects. These actions can indicate attention to a language teacher, who may then be more likely to name the attended objects. Using a novel active learning paradigm in which learners choose which four objects they would like to see named on each successive trial, this study asks whether active learning is superior to passive learning in a cross-situational word learning context. Finding that learners perform better in active learning, we investigate the strategies and discover that most learners use immediate repetition to disambiguate pairings. Unexpectedly, we find that learners who repeat only one pair per trial (an easy way to infer this pair) perform worse than those who repeat multiple pairs per trial. Using a working memory extension to an associative model of word learning with uncertainty and familiarity biases, we investigate individual differences that correlate with these assorted strategies.


Frontiers in Neurorobotics | 2014

Everyday robotic action: lessons from human action control

Roy de Kleijn; George Kachergis; Bernhard Hommel

Robots are increasingly capable of performing everyday human activities such as cooking, cleaning, and doing the laundry. This requires the real-time planning and execution of complex, temporally extended sequential actions under high degrees of uncertainty, which provides many challenges to traditional approaches to robot action control. We argue that important lessons in this respect can be learned from research on human action control. We provide a brief overview of available psychological insights into this issue and focus on four principles that we think could be particularly beneficial for robot control: the integration of symbolic and subsymbolic planning of action sequences, the integration of feedforward and feedback control, the clustering of complex actions into subcomponents, and the contextualization of action-control structures through goal representations.


Joint IEEE International Conferences on Development and Learning and Epigenetic Robotics (ICDL-Epirob) | 2014

Continuous measure of word learning supports associative model

George Kachergis; Chen Yu

Cross-situational learning, the ability to learn word meanings across multiple scenes consisting of multiple words and referents, is thought to be an important tool for language acquisition. The ability has been studied in infants, children, and adults, and yet there is much debate about the basic storage and retrieval mechanisms that operate during cross-situational word learning. It has been difficult to uncover the learning mechanics in part because the standard experimental paradigm, which presents a few words and objects on each of a series of training trials, measures learning only at the end of training after several occurrences of each word-object pair. Thus, the exact learning moment (and its current and historical context) cannot be investigated directly. This paper offers a version of the cross-situational learning task in which a response is made each time a word is heard, as well as in a final test. We compare this to the typical cross-situational learning task, and examine how well the response distributions match two recent computational models of word learning.


Joint IEEE International Conferences on Development and Learning and Epigenetic Robotics (ICDL-Epirob) | 2014

Reward Effects on Sequential Action Learning in a Trajectory Serial Reaction Time Task

George Kachergis; Roy de Kleijn; Floris Berends; Bernhard Hommel

The serial reaction time (SRT) task measures learning of a repeating stimulus sequence as a speedup in keypresses, and is used in implicit and motor learning research that aims to explain complex skill acquisition (e.g., learning to type). However, complex skills involve continuous, temporally extended movements that are not fully measured by the discrete button presses of the SRT task. Using a movement adaptation of the SRT task in which spatial locations serve as both stimuli and response options, participants were trained to move the cursor to a continuous sequence of stimuli. Elsewhere we replicated Nissen and Bullemer (1987) [1] with this trajectory SRT paradigm [2]. The current study extends it to the problem of learning complex actions composed of recurring short sequences of movements that may be rearranged like words. Reaction time and trajectory deflection analyses show within-word improvements relative to unpredictable between-word transitions, suggesting that participants learn to segment the sequence according to the statistics of the input.
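The statistical structure that supports the segmentation effect described above can be sketched directly. This toy example (the movement "words" and sequence length are invented, not taken from the study) builds a long sequence from recurring short words of locations and shows that a learner tracking first-order transition counts predicts within-word steps far better than the unpredictable between-word transitions.

```python
# Toy illustration of within-word vs. between-word predictability
# in a sequence built from recurring movement "words".
import random
from collections import Counter, defaultdict

random.seed(1)
words = [(0, 3, 1), (2, 5, 4), (6, 1, 7)]  # recurring movement "words"
seq, boundaries = [], set()
for _ in range(300):
    boundaries.add(len(seq))  # index where a new word starts
    seq.extend(random.choice(words))

# First-order transition probabilities estimated from the sequence.
counts = defaultdict(Counter)
for a, b in zip(seq, seq[1:]):
    counts[a][b] += 1

def p_next(a, b):
    return counts[a][b] / sum(counts[a].values())

within, between = [], []
for i in range(len(seq) - 1):
    p = p_next(seq[i], seq[i + 1])
    (between if i + 1 in boundaries else within).append(p)

print(sum(within) / len(within), sum(between) / len(between))
```

Within a word the next location is (nearly) deterministic, while at a word boundary the next word is chosen at random, so mean transition probability is much higher within words; this asymmetry is what reaction times and trajectory deflections can reveal if participants track input statistics.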


Proceedings of the Annual Meeting of the Cognitive Science Society | 2009

Frequency and Contextual Diversity Effects in Cross-Situational Word Learning

George Kachergis; Richard M. Shiffrin; Chen Yu


Proceedings of the Annual Meeting of the Cognitive Science Society | 2009

Temporal Contiguity in Cross-Situational Statistical Learning

George Kachergis; Richard M. Shiffrin; Chen Yu


Cognitive Science | 2012

Learning Nouns with Domain-General Associative Learning Mechanisms

George Kachergis

Collaboration


Dive into George Kachergis's collaborations.

Top Co-Authors

Richard M. Shiffrin
Indiana University Bloomington

Robert L. Goldstone
Indiana University Bloomington