Alan Bale
Concordia University
Publications
Featured research published by Alan Bale.
Journal of Semantics | 2009
Alan Bale; David Barner
Comparative judgments for mass and count nouns yield two generalizations. First, all words that can be used in both mass and count syntax (e.g., rock, string, apple, water) always denote individuals when used in count syntax but never when used in mass syntax (e.g. too many rocks vs. too much rock). Second, some mass nouns denote individuals (e.g., furniture) while others do not (e.g., water). In this article, we show that no current theory of mass-count semantics can capture these two facts and argue for an alternative theory that can. We propose that lexical roots are not specified as mass or count. Rather, a root becomes a mass noun or count noun by combining with a functional head. Some roots have denotations with individuals while others do not. The count head is interpreted as a function that maps denotations without individuals to those with individuals. The mass head is interpreted as an identity function making the interpretation of a mass noun equivalent to the interpretation of the root. As a result, all count nouns have individuals in their denotation, whereas mass counterparts of count nouns do not. Also, some roots that have individuals in their denotations can be used as mass nouns to denote individuals.
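The compositional idea can be sketched in a few lines of Python. This is an illustrative toy only: the names (Denotation, count_head, mass_head) and the set-based encoding are assumptions made for the example, not the paper's formal semantics.

```python
# Toy sketch (assumed names, not the authors' formalism): a denotation is a set of
# things the predicate is true of, plus the subset that counts as individuals.
from typing import NamedTuple, FrozenSet

class Denotation(NamedTuple):
    stuff: FrozenSet[str]        # everything the predicate applies to
    individuals: FrozenSet[str]  # the members that count as individuals (may be empty)

def mass_head(root: Denotation) -> Denotation:
    # Mass head = identity function: the mass noun means whatever the root means.
    return root

def count_head(root: Denotation) -> Denotation:
    # Count head maps a denotation without individuals to one with individuals.
    # As a simplification, every element here is treated as an individual.
    return Denotation(stuff=root.stuff, individuals=root.stuff)

# ROCK: a root with no individuals specified; FURNITURE: a root that already has them.
rock = Denotation(frozenset({"pebble", "boulder", "gravel"}), frozenset())
furniture = Denotation(frozenset({"chair", "table"}), frozenset({"chair", "table"}))

assert count_head(rock).individuals        # "rocks" (count) denotes individuals
assert not mass_head(rock).individuals     # "rock" (mass) does not
assert mass_head(furniture).individuals    # "furniture" (mass) still denotes individuals
```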
Lingua | 2002
David Barner; Alan Bale
It is often assumed that the primitive units of grammar are words that are marked for grammatical category (e.g., DiSciullo, A.M., Williams, E., 1987. On the Definition of Word. MIT Press, Cambridge, MA). Based on a review of research in linguistics, neurolinguistics, and developmental psychology, we argue that dividing the lexicon into categories such as noun and verb offers no descriptive edge, and adds unnecessary complexity to both the theory of grammar and language acquisition. Specifically, we argue that a theory without lexical categories provides a better account of creative language use and category-specific neurological deficits, while also offering a natural solution to the bootstrapping problem in language acquisition (Pinker, S., 1982. A theory of the acquisition of lexico-interpretive grammars. In: Bresnan, J. (Ed.), The Mental Representation of Grammatical Relations. MIT Press, Cambridge, MA, pp. 655–726).
Infancy | 2001
Thomas R. Shultz; Alan Bale
A fundamental issue in cognitive science is whether human cognitive processing is better explained by symbolic rules or by subsymbolic neural networks. A recent study of infant familiarization to sentences in an artificial language seems to have produced data that can only be explained by symbolic rule learning and not by unstructured neural networks (Marcus, Vijayan, Bandi Rao, & Vishton, 1999). Here we present successful unstructured neural network simulations of the infant data, showing that these data do not uniquely support a rule-based account. In contrast to other simulations of these data, these simulations cover more aspects of the data with fewer assumptions about prior knowledge and training, using a more realistic coding scheme based on sonority of phonemes. The networks show exponential decreases in attention to a repeated sentence pattern, more recovery to novel sentences inconsistent with the familiar pattern than to novel sentences consistent with the familiar pattern, occasional familiarity preferences, more recovery to consistent novel sentences than to familiarized sentences, and extrapolative generalization outside the range of the training patterns. A variety of predictions suggest the utility of the model in guiding future psychological work. The evidence, from these and other simulations, supports the view that unstructured neural networks can account for the existing infant data.
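The habituation logic can be illustrated with a toy simulation. The sketch below uses a simple linear autoencoder and made-up sonority values rather than the networks actually reported, so the architecture, numbers, and names are assumptions for illustration only: a network trained on ABA-patterned sentences reconstructs novel ABA items better than novel ABB items, modeling greater recovery of attention to inconsistent sentences.

```python
# Toy habituation sketch (not the authors' simulations; sonority values are assumed).
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sonority codes, one number per syllable.
sonority = {"ga": 0.2, "ti": 0.3, "na": 0.5, "la": 0.7, "wo": 0.8, "fe": 0.4}

def encode(sentence):
    """Encode a three-syllable sentence as a vector of sonority values."""
    return np.array([sonority[s] for s in sentence])

# Familiarization sentences all follow an ABA pattern.
familiar = [encode(s) for s in [("ga", "ti", "ga"), ("na", "la", "na"), ("ti", "fe", "ti")]]

# Simple linear autoencoder: 3 -> 2 -> 3, trained by stochastic gradient descent.
W1 = rng.normal(0, 0.1, (2, 3))
W2 = rng.normal(0, 0.1, (3, 2))
lr = 0.1

def error(x):
    """Squared reconstruction error, a stand-in for attention/recovery."""
    return float(np.sum((W2 @ (W1 @ x) - x) ** 2))

for epoch in range(2000):
    for x in familiar:
        h = W1 @ x
        d = W2 @ h - x                 # reconstruction error signal
        W2 -= lr * np.outer(d, h)
        W1 -= lr * np.outer(W2.T @ d, x)

# Novel test items: consistent (ABA) vs. inconsistent (ABB) with the familiar pattern.
consistent = encode(("wo", "fe", "wo"))
inconsistent = encode(("wo", "fe", "fe"))

print("error on consistent novel item:  ", error(consistent))
print("error on inconsistent novel item:", error(inconsistent))
# Higher error on the inconsistent item models greater recovery of attention.
```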
Linguistic Inquiry | 2014
Alan Bale; Jessica Coon
In languages with numeral classifier systems, nouns must generally appear with one of a series of classifiers in order to be modified by a numeral. This squib presents new data from Mi’gmaq (Algonquian) and Chol (Mayan), arguing that numeral classifiers are required because of the syntactic and semantic properties of the numeral (as in Krifka 1995), rather than the noun (as in Chierchia 1998). The results are shown to have important consequences for the mass/count distinction. Mandarin Chinese is a frequently cited example of a language with numeral classifiers. As shown in (1), classifiers cannot be dropped in the presence of numerals.
Journal of Semantics | 2014
Lara Hochstein; Alan Bale; Danny Fox; David Barner
Ignorance and Inference: Do Problems with Gricean Epistemic Reasoning Explain Children's Difficulty with Scalar Implicature?
Unlike adults, children as old as 9 years of age often fail to infer that a sentence like 'Some of the children slept' implies the falsity of its stronger alternative, 'All of the children slept'—an inference referred to as a 'scalar implicature'. Several explanations have been proposed to account for children's failures with scalar implicature, including domain-general processing limitations, pragmatic deficits, or an inability to access the relevant alternatives in a lexical scale (e.g., all as an alternative to some). Our study focused on the role of Gricean epistemic reasoning in children's failures by testing their ability to compute 'ignorance implicatures', which require reasoning about speaker knowledge and informativeness but which differ from scalar implicature with respect to the alternative statements that are involved. We administered two matched tasks to 4- and 5-year-old children: one that assessed their ability to compute ignorance implicatures, and another that assessed their ability to compute scalar implicatures. Five-year-olds successfully computed ignorance implicatures despite failing to compute scalar implicatures, while 4-year-olds failed at both types of inference. These results suggest that 5-year-olds are able to reason about speaker knowledge and informativeness, and thus that it is difficult to explain their deficit with scalar implicature via these factors. We speculate about other possible sources of their difficulties, including processing limits and children's access to the specific scalar alternatives required by scalar implicature.
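The contrast between the two inference types can be sketched schematically. The scale alternatives, function names, and string representations below are illustrative assumptions, not the authors' experimental or formal model: an ignorance implicature only says the speaker is not certain of the stronger alternative, while a scalar implicature additionally assumes a knowledgeable speaker and concludes the alternative is false.

```python
# Toy Gricean sketch (assumed names and scales, not the authors' model).
STRONGER = {"some": ["all"], "or": ["and"]}  # hypothetical scalar alternatives

def ignorance_implicatures(scalar_term):
    """Primary (ignorance) implicatures: the speaker is not certain that any
    stronger alternative holds."""
    return [f"speaker is not certain that '{alt}' holds"
            for alt in STRONGER.get(scalar_term, [])]

def scalar_implicatures(scalar_term, speaker_is_knowledgeable=True):
    """Scalar (secondary) implicatures: with the extra assumption that the speaker
    knows the facts, 'not certain that all' strengthens to 'all is false'."""
    if not speaker_is_knowledgeable:
        return ignorance_implicatures(scalar_term)
    return [f"'{alt}' is false" for alt in STRONGER.get(scalar_term, [])]

# "Some of the children slept":
print(ignorance_implicatures("some"))  # ignorance implicature only
print(scalar_implicatures("some"))     # scalar implicature: "all ... slept" is false
```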
Minds and Machines | 2006
Thomas R. Shultz; Alan Bale
Computer simulations show that an unstructured neural-network model [Shultz, T. R., & Bale, A. C. (2001). Infancy, 2, 501–536] covers the essential features of infant learning of simple grammars in an artificial language [Marcus, G. F., Vijayan, S., Bandi Rao, S., & Vishton, P. M. (1999). Science, 283, 77–80], and generalizes to examples both outside and inside of the range of training sentences. Knowledge-representation analyses confirm that these networks discover that duplicate words in the sentences are nearly identical and that they use this near-identity relation to distinguish sentences that are consistent or inconsistent with a familiar grammar. Recent simulations that were claimed to show that this model did not really learn these grammars [Vilcu, M., & Hadley, R. F. (2005). Minds and Machines, 15, 359–382] confounded syntactic types with speech sounds and did not perform standard statistical tests of results.
Nordlyd | 2014
Jessica Coon; Alan Bale
This paper presents puzzles concerning the representation of features in the agreement system of the Eastern Algonquian language, Mi’gmaq. A growing body of research converges on the idea that φ-agreement should be separated into distinct person (π⁰), number (#⁰), and sometimes gender (γ⁰) probes (e.g. Anagnostopoulou 2003, Bejar 2003, Bejar and Rezac 2003, Laka 1993, Shlonsky 1989, Sigurðsson 1996, Sigurðsson and Holmberg 2008, Preminger 2012). While these proposals account well for agreement and partial agreement patterns in a number of languages, we show that in order to account for the agreement system of Mi’gmaq, π⁰ and #⁰ must probe together, which we argue to be the result of fusion of two distinct probes. We discuss the implications of Mi’gmaq agreement for “prominence hierarchies” and feature geometries in the grammar.
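The contrast between separate and fused probes can be caricatured in a few lines. The feature bundles and function names below are illustrative assumptions, not a formal implementation of Agree: separate probes can each succeed or fail on their own (allowing partial agreement), while a fused probe must find person and number together.

```python
# Toy sketch of the probing contrast (assumed names; not a formal model of Agree).
goal = {"person": 2, "number": "plural"}  # a goal as a bundle of phi-features

def separate_probes(goal):
    """Distinct person and number probes: each matches (or fails) independently,
    which is what makes partial-agreement patterns possible."""
    return ("person" in goal, "number" in goal)

def fused_probe(goal):
    """A fused probe: person and number are checked together, so agreement
    succeeds only if both features are found on the same goal."""
    return "person" in goal and "number" in goal

print(separate_probes({"person": 2}))  # (True, False): partial agreement possible
print(fused_probe({"person": 2}))      # False: the fused probe fails outright
print(fused_probe(goal))               # True: both features present
```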
Archive | 2013
Alan Bale; David Barner
Language acquisition offers a unique window into linguistic competence. As children acquire language, different aspects of competence, like knowledge of syntax, semantics, and pragmatics, emerge at different moments, allowing them to be logically dissociated. As a result, developmental data can be used to decide between competing linguistic models that posit different structures, but nonetheless make similar predictions regarding mature competence. It is this premise that has guided a recent surge in the study of pragmatic development. If children are delayed in their ability to make pragmatic inferences, then researchers might not only distinguish semantic and pragmatic sources of meaning, but they might also be able to decompose pragmatic reasoning into its component parts.
Language Learning and Development | 2018
Lara Hochstein; Alan Bale; David Barner
We investigated “scalar implicature” in adolescents and children with autism spectrum disorder (ASD) to test whether theory of mind deficits associated with autism affect pragmatic inferences in language. We tested scalar implicature computation in adolescents with ASD (12–18 years) and asked whether they reason about mental states when computing inferences. Consistent with previous studies, we found that adolescents with ASD computed implicatures to the same degree as neurotypical adults. However, we also found that this ability may not rely on epistemic reasoning. In a test of epistemic reasoning (which probed so-called “ignorance implicature”), we found that adolescents with ASD were able to make the epistemic inferences required by Gricean models of scalar implicature when they were explicitly required by the task to do so. However, in a second task, which asked whether subjects spontaneously reason about mental states in the service of scalar implicature when not explicitly asked to, we found that adolescents with ASD did not engage in epistemic reasoning, leading them to compute scalar implicatures in contexts in which they were not justified. Based on these data, we argue that epistemic reasoning may not be a core, constitutive component of scalar implicature.
Language Learning and Development | 2018
Jessica Sullivan; Alan Bale; David Barner
Recently, researchers interested in the nature and origins of semantic representations have investigated an especially informative case study: the acquisition of the word most—a quantifier which by all accounts demands a sophisticated second-order logic, and which therefore poses an interesting challenge to theories of language acquisition. According to some reports, children acquire most as early as three years of age, suggesting that it does not draw on cardinal representations of quantity (contrary to some formal accounts), since adult-like knowledge of counting emerges later in development. Other studies, however, have provided evidence that children acquire most much later—possibly by the age of 6 or 7—thereby calling this reasoning into question. Here we explore this issue by conducting a series of experiments that probed children’s knowledge of most in different ways. We conclude that children do not acquire an adult-like meaning for most until very late in development—around the age of 6—and that certain behaviors which appear consistent with earlier knowledge are better explained by children’s well-attested bias to select larger sets (a “more” bias), especially when tested with unfamiliar words.
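The second-order, cardinality-comparing meaning at issue is standardly stated as: 'Most A are B' is true iff |A ∩ B| > |A − B|. A minimal sketch of that standard analysis follows; it is for illustration only, and the paper's own formal details may differ.

```python
# Standard cardinality-comparison analysis of "most", sketched for illustration.
def most(A: set, B: set) -> bool:
    """'Most A are B' is true iff the As that are B outnumber the As that are not."""
    return len(A & B) > len(A - B)

dots = {"d1", "d2", "d3", "d4", "d5"}
blue = {"d1", "d2", "d3"}

print(most(dots, blue))          # True: 3 blue dots vs. 2 non-blue dots
print(most(dots, {"d1", "d2"}))  # False: 2 vs. 3
```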