Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Pyeong Whan Cho is active.

Publication


Featured research published by Pyeong Whan Cho.


Linguistics Vanguard | 2017

Incremental parsing in a continuous dynamical system: Sentence processing in gradient symbolic computation

Pyeong Whan Cho; Matthew Goldrick; Paul Smolensky

Any incremental parser must solve two computational problems: (1) maintaining all interpretations consistent with the words that have been processed so far and (2) excluding all globally incoherent interpretations. While these problems are well understood, it is not clear how the dynamic, continuous mechanisms that underlie human language processing solve them. We introduce a Gradient Symbolic Computation (GSC) parser, a continuous-state, continuous-time stochastic dynamical-system model of symbolic processing, which builds up a discrete symbolic structure gradually by dynamically strengthening a discreteness constraint. Online, interactive tutorials with open-source software are presented on a companion website. Our results reveal that the GSC parser solves the two computational problems by moving to a non-discrete blend state that evolves exclusively to discrete states representing contextually appropriate, globally coherent interpretations. In a simulation study using a simple formal grammar, we show that successful parsing requires appropriate control of the strength of the discreteness constraint (a quantization policy). With inappropriate quantization policies, the GSC parser makes mistakes that mirror those made in natural language comprehension (garden-path or local-coherence errors). These findings suggest that the GSC model offers a neurally plausible solution to both core problems.
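The core mechanism described here — relaxing on a harmony landscape while a discreteness (quantization) constraint is gradually strengthened — can be illustrated with a minimal sketch. The Python toy below is not the authors' GSC implementation; the weight matrix W, the double-well penalty form, and the linear quantization schedule are all invented for illustration.

    import numpy as np

    rng = np.random.default_rng(0)

    # Invented pairwise "harmony" weights: units 1 and 2 support each
    # other, and both conflict with unit 3, so the coherent discrete
    # pattern [1, 1, 0] has the highest harmony.
    W = np.array([[ 0.0,  1.0, -1.0],
                  [ 1.0,  0.0, -1.0],
                  [-1.0, -1.0,  0.0]])

    def harmony_grad(x, q):
        # Gradient of H(x) = 0.5 x'Wx - q * sum(x^2 (1 - x)^2).
        # The second term is a double-well penalty, minimized when each
        # unit is discrete (0 or 1); q scales its strength.
        return W @ x - q * 2 * x * (1 - x) * (1 - 2 * x)

    x = rng.uniform(0.4, 0.6, size=3)      # start in a non-discrete blend state
    eta, temp = 0.05, 0.01                 # step size and noise temperature
    for step in range(2000):
        q = 0.01 * step                    # quantization policy: ramp q up slowly
        x += eta * harmony_grad(x, q) + np.sqrt(2 * eta * temp) * rng.normal(size=3)
        x = np.clip(x, 0.0, 1.0)
    print(np.round(x, 2))                  # typically settles near [1., 1., 0.]

In this toy, ramping q too quickly commits the state to whichever corner happens to be nearest before the harmony term can steer it — a crude analogue of the garden-path errors the paper attributes to inappropriate quantization policies.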


Frontiers in Psychology | 2016

Discovery of a Recursive Principle: An Artificial Grammar Investigation of Human Learning of a Counting Recursion Language

Pyeong Whan Cho; Emily Szkudlarek; Whitney Tabor

Learning is typically understood as a process in which the behavior of an organism is progressively shaped until it closely approximates a target form. It is easy to see how a motor skill or a vocabulary can be learned progressively: in each case, one can conceptualize a series of intermediate steps leading to proficient behavior. With grammar, it is more difficult to think in these terms. For example, center-embedding recursive structures seem to involve a complex interplay between multiple symbolic rules that must all be in place simultaneously for the system to work at all, so it is not obvious how the mechanism could gradually come into being. Here, we offer empirical evidence from a new artificial language (or "artificial grammar") learning paradigm, Locus Prediction, that, despite the conceptual conundrum, recursion acquisition occurs gradually, at least for a simple formal language. In particular, we focus on a variant of the simplest recursive language, aⁿbⁿ, and find evidence that (i) participants trained on two levels of structure (essentially ab and aabb) generalize to the next higher level (aaabbb) more readily than participants trained on one level of structure (ab) combined with a filler sentence, though they do not generalize immediately; (ii) participants trained up to three levels (ab, aabb, aaabbb) generalize more readily to four levels than participants trained on two levels generalize to three; (iii) when the levels are presented in succession, starting with the lower levels and progressively including the higher ones, participants transition between the levels gradually, exhibiting intermediate patterns of behavior on which they were not trained; and (iv) these intermediate patterns are associated with perturbations of an attractor in the sense of dynamical systems theory. We argue that all of these behaviors point to a theory of mental representation in which recursive systems lie on a continuum of grammar systems, organized so that grammars producing similar behaviors are near one another, and that people learning a recursive system navigate progressively through this space of grammars.
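For concreteness, the training levels correspond to the first few strings of the counting-recursion language aⁿbⁿ, which can be generated — and recognized — with a single counter. The sketch below (Python; a hypothetical helper, not the study's stimulus code) reproduces the levels named in the abstract.

    def generate(n):
        # Level-n string of a^n b^n: n a's followed by n b's.
        return "a" * n + "b" * n

    def accepts(s):
        # Recognize a^n b^n (n >= 1) with one counter: each 'a'
        # increments, each 'b' decrements, and no 'a' may follow a 'b'.
        count, seen_b = 0, False
        for ch in s:
            if ch == "a":
                if seen_b:
                    return False
                count += 1
            elif ch == "b":
                seen_b = True
                count -= 1
                if count < 0:
                    return False
            else:
                return False
        return seen_b and count == 0

    print([generate(n) for n in range(1, 4)])     # ['ab', 'aabb', 'aaabbb']
    print(accepts("aaaabbbb"), accepts("aabbb"))  # True False

The single counter is what makes aⁿbⁿ the simplest recursive (non-finite-state) language: each additional level of embedding requires remembering one more unresolved 'a'.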


Journal of Experimental Psychology: Learning, Memory, and Cognition | 2014

Lexical Interference Effects in Sentence Processing: Evidence From the Visual World Paradigm and Self-Organizing Models

Anuenue Kukona; Pyeong Whan Cho; James S. Magnuson; Whitney Tabor


Topics in Cognitive Science | 2013

Fractal Analysis Illuminates the Form of Connectionist Structural Gradualness

Whitney Tabor; Pyeong Whan Cho; Emily Szkudlarek


Cognitive Science | 2011

An Artificial Grammar Investigation into the Mental Encoding of Syntactic Structure

Pyeong Whan Cho; Emily Szkudlarek; Anuenue Kukona; Whitney Tabor


North American Chapter of the Association for Computational Linguistics | 2012

Fractal Unfolding: A Metamorphic Approach to Learning to Parse Recursive Structure

Whitney Tabor; Pyeong Whan Cho; Emily Szkudlarek


Proceedings of the Annual Meeting of the Cognitive Science Society | 2010

Effects of Anticipatory Coarticulation on Lexical Access

Stephen Tobin; Pyeong Whan Cho; Patrick Jennet; James S. Magnuson


Proceedings of the Annual Meeting of the Cognitive Science Society | 2009

Extraordinary Natural Ability: Anagram Solution as an Extension of Normal Reading Ability

Emma Accorsi; Pyeong Whan Cho; Jonathan Henin; Whitney Tabor


Cognitive Science | 2016

Bifurcation analysis of a Gradient Symbolic Computation model of incremental processing

Pyeong Whan Cho; Paul Smolensky


arXiv | 2018

Discrete symbolic optimization and Boltzmann sampling by continuous neural dynamics: Gradient Symbolic Computation

Paul Tupper; Paul Smolensky; Pyeong Whan Cho

Collaboration


Dive into Pyeong Whan Cho's collaborations.

Top Co-Authors

Whitney Tabor
University of Connecticut

Paul Smolensky
Johns Hopkins University

Stephen Tobin
University of Connecticut