Publications


Featured research published by Felix Hao Wang.


Cognitive Psychology | 2014

Word Categorization From Distributional Information: Frames Confer More Than the Sum of Their (Bigram) Parts

Toben H. Mintz; Felix Hao Wang; Jia Li

Grammatical categories, such as noun and verb, are the building blocks of syntactic structure and the components that govern the grammatical patterns of language. However, in many languages words are not explicitly marked with their category information, hence a critical part of acquiring a language is categorizing the words. Computational analyses of child-directed speech have shown that distributional information (information about how words pattern with one another in sentences) could be a useful source of initial category information. Yet questions remain as to whether learners use this kind of information, and if so, what kinds of distributional patterns facilitate categorization. In this paper we investigated how adults exposed to an artificial language use distributional information to categorize words. We compared training situations in which target words occurred in frames (i.e., surrounded by two words that frequently co-occur) against situations in which target words occurred in simpler bigram contexts (where an immediately adjacent word provides the context for categorization). We found that learners categorized words together when they occurred in similar frame contexts, but not when they occurred in similar bigram contexts. These findings are particularly relevant because they accord with computational investigations showing that frame contexts provide accurate category information cross-linguistically. We discuss these findings in the context of prior research on distribution-based categorization and the broader implications for the role of distributional categorization in language acquisition.
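The frame contexts described in the abstract can be made concrete with a small sketch: a frequent frame is a pair of words that often co-occur with exactly one word between them (e.g., "the _ is"), and the intervening words are grouped as a candidate grammatical category. The code below is a minimal illustration of this idea on an invented toy corpus, not the authors' analysis pipeline.

```python
from collections import defaultdict

def frame_categories(corpus, min_count=2):
    """Group words by the (left, right) frame they occur in.

    A frame is a pair of words that co-occur with exactly one word
    between them. Words sharing a frequent frame form a candidate
    grammatical category.
    """
    frames = defaultdict(set)   # frame -> set of intervening words
    counts = defaultdict(int)   # frame -> frequency
    for sentence in corpus:
        words = sentence.split()
        for left, mid, right in zip(words, words[1:], words[2:]):
            frames[(left, right)].add(mid)
            counts[(left, right)] += 1
    # Keep only frames seen at least min_count times ("frequent frames").
    return {f: members for f, members in frames.items()
            if counts[f] >= min_count}

# Toy corpus (invented for illustration only).
corpus = [
    "the dog is here",
    "the cat is here",
    "the dog runs fast",
    "the cat runs fast",
]
cats = frame_categories(corpus)
# The frame ("the", "is") groups "dog" and "cat" into one category.
print(cats[("the", "is")])
```

A bigram-based learner, by contrast, would condition only on the single left (or right) neighbor, which is a weaker and noisier cue to category membership.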


Journal of Experimental Psychology: Learning, Memory, and Cognition | 2018

Learning Nonadjacent Dependencies Embedded in Sentences of an Artificial Language: When Learning Breaks Down.

Felix Hao Wang; Toben H. Mintz

The structure of natural languages gives rise to many dependencies in the linear sequences of words, and within words themselves. Detecting these dependencies is arguably critical for young children in learning the underlying structure of their language. There is considerable evidence that human adults and infants are sensitive to the statistical properties of sequentially adjacent items. However, the conditions under which learners detect nonadjacent dependencies (NADs) appear to be much more limited. This has resulted in proposals that the kinds of learning mechanisms learners deploy in processing adjacent dependencies are fundamentally different from those deployed in learning NADs. Here we challenge this view. In 4 experiments, we show that learning both kinds of dependencies is hindered in conditions when they are embedded in longer sequences of words, and facilitated when they are isolated by silences. We argue that the findings from the present study and prior research are consistent with a theory that similar mechanisms are deployed for adjacent and nonadjacent dependency learning, but that NAD learning is simply computationally more complex. Hence, in some situations NAD learning is only successful when constraining information is provided, but critically, that additional information benefits adjacent dependency learning in similar ways.
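The adjacent/nonadjacent distinction can be stated in terms of transitional probabilities: an adjacent dependency is a reliable P(next word | current word), while a NAD is a reliable P(word two positions later | current word), computed across an intervening element. The sketch below illustrates this on toy "a _ b" sequences of the kind typical in NAD experiments; the sequences and function names are invented for illustration.

```python
from collections import Counter

def transitional_probability(sequences, gap):
    """P(later | earlier) for pairs separated by `gap` positions.

    gap=1 gives adjacent transitional probabilities; gap=2 gives
    nonadjacent ones, skipping one intervening element.
    """
    pair_counts = Counter()
    first_counts = Counter()
    for seq in sequences:
        for i in range(len(seq) - gap):
            pair_counts[(seq[i], seq[i + gap])] += 1
            first_counts[seq[i]] += 1
    return {pair: n / first_counts[pair[0]]
            for pair, n in pair_counts.items()}

# Toy language: "a" predicts "b" across a variable middle element.
seqs = [["a", "x", "b"], ["a", "y", "b"], ["a", "z", "b"]]
adjacent = transitional_probability(seqs, gap=1)
nonadjacent = transitional_probability(seqs, gap=2)
```

Here the nonadjacent statistic P(b | a) is perfect (1.0) even though each adjacent statistic out of "a" is weak (1/3), which is why tracking NADs requires considering a larger space of candidate dependencies than tracking adjacent ones.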


Journal of Experimental Psychology: General | 2017

Top-down structure influences learning of nonadjacent dependencies in an artificial language.

Felix Hao Wang; Jason D. Zevin; Toben H. Mintz

Because of the hierarchical organization of natural languages, words that are syntactically related are not always linearly adjacent. For example, the subject and verb in “the child always runs” agree in person and number, although they are not adjacent in the sequence of words. Since such dependencies are indicative of abstract linguistic structure, it is of significant theoretical interest how these relationships are acquired by language learners. Most experiments that investigate nonadjacent dependency (NAD) learning have used artificial languages in which the to-be-learned dependencies are isolated, by presenting the minimal sequences that contain the dependent elements. However, dependencies in natural language are not typically isolated in this way. We report the first demonstration to our knowledge of successful learning of embedded NADs, in which silences do not mark dependency boundaries. Subjects heard passages of English with a predictable structure, interspersed with passages of the artificial language. The English sentences were designed to induce boundaries in the artificial languages. In Experiments 1 and 3 the artificial NADs were contained within the induced boundaries and subjects learned them, whereas in Experiments 2 and 4, the NADs crossed the induced boundaries and subjects did not learn them. We take this as evidence that sentential structure was “carried over” from the English sentences and used to organize the artificial language. This approach provides several new insights into the basic mechanisms of NAD learning in particular and statistical learning in general.


Behavioral and Brain Sciences | 2016

Language acquisition is model-based rather than model-free.

Felix Hao Wang; Toben H. Mintz

Christiansen & Chater (C&C) propose that learning language is learning to process language. However, we believe that the general-purpose prediction mechanism they propose is insufficient to account for many phenomena in language acquisition. We argue from theoretical considerations and empirical evidence that many acquisition tasks are model-based, and that different acquisition tasks require different, specialized models.


Cognition | 2018

The role of reference in cross-situational word learning

Felix Hao Wang; Toben H. Mintz



Cognitive Science | 2017

A New Model of Statistical Learning: Trajectories Through Perceptual Similarity Space.

Elizabeth A. Hutton; Felix Hao Wang; Jason D. Zevin


Cognitive Science | 2015

Cross-situational Word Learning Results in Explicit Memory Representations.

Felix Hao Wang; Toben H. Mintz


Cognitive Science | 2015

Characterizing the Difference Between Learning about Adjacent and Non-adjacent Dependencies.

Felix Hao Wang; Toben H. Mintz


Cognitive Science | 2015

Statistical Structures in Artificial Languages Prime Relative Clause Attachment Biases in English.

Felix Hao Wang; Mythili Menon; Elsi Kaiser

Collaboration


Dive into Felix Hao Wang's collaborations.

Top Co-Authors

Toben H. Mintz (University of Southern California)

Jason D. Zevin (University of Southern California)

Elsi Kaiser (University of Southern California)

Jia Li (University of Southern California)