Publication


Featured research published by Joseph L. Austerweil.


Language and Technology Conference | 2006

Multilevel Coarse-to-Fine PCFG Parsing

Eugene Charniak; Mark Johnson; Micha Elsner; Joseph L. Austerweil; David Ellis; Isaac Haxton; Catherine Hill; R. Shrivaths; Jeremy Moore; Michael Pozar; Theresa Vu

We present a PCFG parsing algorithm that uses a multilevel coarse-to-fine (mlctf) scheme to improve the efficiency of search for the best parse. Our approach requires the user to specify a sequence of nested partitions or equivalence classes of the PCFG nonterminals. We define a sequence of PCFGs corresponding to each partition, where the nonterminals of each PCFG are clusters of nonterminals of the original source PCFG. We use the results of parsing at a coarser level (i.e., grammar defined in terms of a coarser partition) to prune the next finer level. We present experiments showing that with our algorithm the work load (as measured by the total number of constituents processed) is decreased by a factor of ten with no decrease in parsing accuracy compared to standard CKY parsing with the original PCFG. We suggest that the search space over mlctf algorithms is almost totally unexplored so that future work should be able to improve significantly on these results.
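As a rough illustration of the pruning scheme, here is a minimal sketch in Python. It is not the paper's parser: the grammar format is hypothetical and the max-over-members projection is my own crude stand-in for the coarse grammar's probabilities. The coarse pass collapses fine nonterminals into the user-specified clusters, and its chart decides which cells the fine pass may fill.

```python
from collections import defaultdict
import math

# Hypothetical grammar format: binary rules as {(parent, left, right): log-prob},
# plus a user-supplied map from fine nonterminals to coarse clusters.
def project_grammar(rules, cluster_of):
    """Collapse fine binary rules onto coarse clusters (max over members is a
    crude stand-in for the coarse grammar's probabilities)."""
    coarse = defaultdict(lambda: -math.inf)
    for (parent, left, right), logp in rules.items():
        key = (cluster_of[parent], cluster_of[left], cluster_of[right])
        coarse[key] = max(coarse[key], logp)
    return dict(coarse)

def allowed_cells(coarse_chart, cluster_of, threshold):
    """Which (span, fine nonterminal) pairs survive the coarse pass.
    coarse_chart maps spans to {coarse label: log score}, e.g. produced by a
    CKY pass over the projected grammar (not shown here)."""
    keep = set()
    for span, cell in coarse_chart.items():
        for fine, cluster in cluster_of.items():
            if cell.get(cluster, -math.inf) > threshold:
                keep.add((span, fine))
    return keep

# The fine CKY pass then skips any constituent not in the returned set, which
# is where the reported factor-of-ten reduction in work comes from.
```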


Psychological Review | 2015

Random Walks on Semantic Networks Can Resemble Optimal Foraging

Joshua T. Abbott; Joseph L. Austerweil; Thomas L. Griffiths

When people are asked to retrieve members of a category from memory, clusters of semantically related items tend to be retrieved together. A recent article by Hills, Jones, and Todd (2012) argued that this pattern reflects a process similar to optimal strategies for foraging for food in patchy spatial environments, with an individual making a strategic decision to switch away from a cluster of related information as it becomes depleted. We demonstrate that similar behavioral phenomena also emerge from a random walk on a semantic network derived from human word-association data. Random walks provide an alternative account of how people search their memories, postulating an undirected rather than a strategic search process. We show that results resembling optimal foraging are produced by random walks when related items are close together in the semantic network. These findings are reminiscent of arguments from the debate on mental imagery, showing how different processes can produce similar results when operating on different representations.
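A minimal sketch of the random-walk account, assuming a toy word-association graph stored as adjacency lists (the graph below is invented; in the paper the network is derived from human word-association norms). The walker moves to a random neighbor at each step, and an item counts as retrieved the first time it is visited; clustered output and long gaps between clusters emerge when related words are densely interconnected.

```python
import random

# Toy word-association network (hypothetical edges).
graph = {
    "dog": ["cat", "bone", "leash"],
    "cat": ["dog", "mouse", "whiskers"],
    "mouse": ["cat", "cheese"],
    "bone": ["dog"],
    "leash": ["dog"],
    "whiskers": ["cat"],
    "cheese": ["mouse"],
}

def random_walk_retrieval(graph, start, n_items, rng=random.Random(0)):
    """Retrieve n_items by recording the first visit to each node."""
    visited, order, gaps = set(), [], []
    current, steps = start, 0
    while len(order) < n_items:
        steps += 1
        current = rng.choice(graph[current])
        if current not in visited:
            visited.add(current)
            order.append(current)
            gaps.append(steps)   # steps since the last new item, a proxy for retrieval time
            steps = 0
    return order, gaps

print(random_walk_retrieval(graph, "dog", 5))
```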


Cognitive Science | 2011

Seeking confirmation is rational for deterministic hypotheses

Joseph L. Austerweil; Thomas L. Griffiths

The tendency to test outcomes that are predicted by our current theory (the confirmation bias) is one of the best-known biases of human decision making. We prove that the confirmation bias is an optimal strategy for testing hypotheses when those hypotheses are deterministic, each making a single prediction about the next event in a sequence. Our proof applies for two normative standards commonly used for evaluating hypothesis testing: maximizing expected information gain and maximizing the probability of falsifying the current hypothesis. This analysis rests on two assumptions: (a) that people predict the next event in a sequence in a way that is consistent with Bayesian inference; and (b) when testing hypotheses, people test the hypothesis to which they assign highest posterior probability. We present four behavioral experiments that support these assumptions, showing that a simple Bayesian model can capture people’s predictions about numerical sequences (Experiments 1 and 2), and that we can alter the hypotheses that people choose to test by manipulating the prior probability of those hypotheses (Experiments 3 and 4).
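A minimal sketch of the setup (my own illustration; the hypotheses and prior weights are invented): each deterministic hypothesis predicts exactly one next number, beliefs are updated by Bayes' rule, and the confirmatory test queries the prediction of the highest-posterior hypothesis.

```python
# Deterministic hypotheses about number sequences: each maps the sequence so
# far to a single predicted next number.
hypotheses = {
    "add 2":     lambda seq: seq[-1] + 2,
    "double":    lambda seq: seq[-1] * 2,
    "next even": lambda seq: seq[-1] + 2 if seq[-1] % 2 == 0 else seq[-1] + 1,
}
prior = {"add 2": 0.5, "double": 0.3, "next even": 0.2}   # arbitrary prior weights

def posterior(seq, prior, hypotheses):
    """P(h | seq): a deterministic h has likelihood 1 if it reproduces every
    observed continuation of the sequence, and 0 otherwise."""
    scores = {}
    for name, rule in hypotheses.items():
        consistent = all(rule(seq[:i]) == seq[i] for i in range(1, len(seq)))
        scores[name] = prior[name] if consistent else 0.0
    z = sum(scores.values())
    return {name: s / z for name, s in scores.items()} if z else scores

seq = [2, 4]
post = posterior(seq, prior, hypotheses)
best = max(post, key=post.get)
print(post)                                            # all three remain consistent here
print("confirmatory test:", hypotheses[best](seq))     # query the MAP prediction: 6
```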


Cognition | 2015

Analyzing the history of Cognition using Topic Models.

Uriel Cohen Priva; Joseph L. Austerweil

Very few articles have analyzed how cognitive science as a field has changed over the last six decades. We explore how Cognition changed over the last four decades using Topic Models. Topic Models assume that every word in every document is generated by one of a limited number of topics. Words that are likely to co-occur are likely to be generated by a single topic. We find a number of significant historical trends: the rise of moral cognition, eyetracking methods, and action, the fall of sentence processing, and the stability of development. We introduce the notion of framing topics, which frame content, rather than present the content itself. These framing topics suggest that over time Cognition turned from abstract theorizing to more experimental approaches.
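A minimal sketch of fitting a topic model with scikit-learn (not the authors' pipeline; the toy documents below are placeholders for the Cognition corpus, and scikit-learn is assumed to be available):

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = [
    "infants track statistical regularities in speech",
    "eye movements reveal online sentence processing",
    "moral judgments depend on intention and outcome",
    "readers fixate longer on unpredictable words",
]

vectorizer = CountVectorizer(stop_words="english")
counts = vectorizer.fit_transform(docs)

lda = LatentDirichletAllocation(n_components=2, random_state=0)
doc_topics = lda.fit_transform(counts)           # per-document topic mixtures

terms = vectorizer.get_feature_names_out()
for k, topic in enumerate(lda.components_):      # per-topic word weights
    top = topic.argsort()[-5:][::-1]
    print(f"topic {k}:", [terms[i] for i in top])
```

Tracking how the per-document topic mixtures change with publication year is the kind of analysis the historical trends above are based on.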


Psychological Review | 2013

A nonparametric Bayesian framework for constructing flexible feature representations

Joseph L. Austerweil; Thomas L. Griffiths

Representations are a key explanatory device used by cognitive psychologists to account for human behavior. Understanding the effects of context and experience on the representations people use is essential, because if two people encode the same stimulus using different representations, their response to that stimulus may be different. We present a computational framework that can be used to define models that flexibly construct feature representations (where by a feature we mean a part of the image of an object) for a set of observed objects, based on nonparametric Bayesian statistics. Austerweil and Griffiths (2011) presented an initial model constructed in this framework that captures how the distribution of parts affects the features people use to represent a set of objects. We build on this work in three ways. First, although people use features that can be transformed on each observation (e.g., translate on the retinal image), many existing feature learning models can only recognize features that are not transformed (occur identically each time). Consequently, we extend the initial model to infer features that are invariant over a set of transformations, and learn different structures of dependence between feature transformations. Second, we compare two possible methods for capturing the manner that categorization affects feature representations. Finally, we present a model that learns features incrementally, capturing an effect of the order of object presentation on the features people learn. We conclude by considering the implications and limitations of our empirical and theoretical results.
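The nonparametric prior at the heart of this framework is the Indian Buffet Process. Below is a minimal sketch of sampling an object-by-feature ownership matrix from that prior (my own illustration; the paper's transformation invariance, categorization effects, and incremental learning are not shown):

```python
import numpy as np

def sample_ibp(n_objects, alpha, rng=np.random.default_rng(0)):
    """Sample a binary object-by-feature matrix from the Indian Buffet Process."""
    features = []                                      # one list of 0/1 owners per feature
    for i in range(n_objects):
        for f in features:                             # reuse feature k w.p. m_k / (i + 1)
            f.append(1 if rng.random() < sum(f) / (i + 1) else 0)
        for _ in range(rng.poisson(alpha / (i + 1))):  # brand-new features for object i
            features.append([0] * i + [1])
    if not features:
        return np.zeros((n_objects, 0), dtype=int)
    return np.array(features).T                        # rows: objects, columns: features

print(sample_ibp(n_objects=5, alpha=2.0))
```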


PLOS ONE | 2016

The Sapir-Whorf Hypothesis and Probabilistic Inference: Evidence from the Domain of Color

Emily Cibelli; Yang Xu; Joseph L. Austerweil; Thomas L. Griffiths; Terry Regier

The Sapir-Whorf hypothesis holds that our thoughts are shaped by our native language, and that speakers of different languages therefore think differently. This hypothesis is controversial in part because it appears to deny the possibility of a universal groundwork for human cognition, and in part because some findings taken to support it have not reliably replicated. We argue that considering this hypothesis through the lens of probabilistic inference has the potential to resolve both issues, at least with respect to certain prominent findings in the domain of color cognition. We explore a probabilistic model that is grounded in a presumed universal perceptual color space and in language-specific categories over that space. The model predicts that categories will most clearly affect color memory when perceptual information is uncertain. In line with earlier studies, we show that this model accounts for language-consistent biases in color reconstruction from memory in English speakers, modulated by uncertainty. We also show, to our knowledge for the first time, that such a model accounts for influential existing data on cross-language differences in color discrimination from memory, both within and across categories. We suggest that these ideas may help to clarify the debate over the Sapir-Whorf hypothesis.
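A minimal sketch of the core idea under simplifying assumptions (a one-dimensional hue axis, Gaussian perceptual noise, and a single Gaussian category; none of these details match the published model): reconstruction from memory is the posterior mean combining the noisy perceptual trace with the category prior, so it is pulled toward the category prototype, and more strongly when the trace is noisier.

```python
def reconstruct(trace, trace_sd, category_mean, category_sd):
    """Posterior mean for a Gaussian category prior and Gaussian perceptual likelihood."""
    w = (1 / trace_sd**2) / (1 / trace_sd**2 + 1 / category_sd**2)
    return w * trace + (1 - w) * category_mean

true_hue = 200.0            # a "blue" stimulus, in arbitrary hue units (hypothetical)
blue_prototype = 230.0      # hypothetical category prototype

print(reconstruct(true_hue, trace_sd=5.0,  category_mean=blue_prototype, category_sd=20.0))
print(reconstruct(true_hue, trace_sd=25.0, category_mean=blue_prototype, category_sd=20.0))
# The second (noisier) reconstruction sits much closer to the prototype,
# which is the model's signature of category effects under uncertainty.
```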


Cognitive Psychology | 2011

A rational model of the effects of distributional information on feature learning

Joseph L. Austerweil; Thomas L. Griffiths

Most psychological theories treat the features of objects as being fixed and immediately available to observers. However, novel objects have an infinite array of properties that could potentially be encoded as features, raising the question of how people learn which features to use in representing those objects. We focus on the effects of distributional information on feature learning, considering how a rational agent should use statistical information about the properties of objects in identifying features. Inspired by previous behavioral results on human feature learning, we present an ideal observer model based on nonparametric Bayesian statistics. This model balances the idea that objects have potentially infinitely many features with the goal of using a relatively small number of features to represent any finite set of objects. We then explore the predictions of this ideal observer model. In particular, we investigate whether people are sensitive to how parts co-vary over objects they observe. In a series of four behavioral experiments (three using visual stimuli, one using conceptual stimuli), we demonstrate that people infer different features to represent the same four objects depending on the distribution of parts over the objects they observe. Additionally in all four experiments, the features people infer have consequences for how they generalize properties to novel objects. We also show that simple models that use the raw sensory data as inputs and standard dimensionality reduction techniques (principal component analysis and independent component analysis) are insufficient to explain our results.
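The ideal observer itself rests on the same nonparametric Bayesian prior sketched earlier for the 2013 framework. The dimensionality-reduction baselines that the paper argues are insufficient can be run as follows (a minimal sketch with a hypothetical object-by-part matrix standing in for the experimental stimuli):

```python
import numpy as np
from sklearn.decomposition import PCA, FastICA

# Rows are objects, columns are binary part indicators (invented data).
objects = np.array([
    [1, 1, 0, 0, 1, 0],
    [1, 1, 0, 0, 0, 1],
    [0, 0, 1, 1, 1, 0],
    [0, 0, 1, 1, 0, 1],
], dtype=float)

pca_features = PCA(n_components=2).fit_transform(objects)
ica_features = FastICA(n_components=2, random_state=0).fit_transform(objects)
print(pca_features.round(2))
print(ica_features.round(2))
# Unlike the nonparametric Bayesian model, these baselines recover continuous
# components rather than a small set of discrete, reusable parts.
```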


Attention Perception & Psychophysics | 2010

Vertical position as a cue to pictorial depth: Height in the picture plane versus distance to the horizon

Jonathan S. Gardner; Joseph L. Austerweil; Stephen E. Palmer

Two often cited but frequently confused pictorial cues to perceived depth are height in the picture plane (HPP) and distance to the horizon (DH). We report two psychophysical experiments that disentangled their influence on perception of relative depth in pictures of the interior of a schematic room. Experiment 1 showed that when HPP and DH varied independently with both a ceiling and a floor plane visible in the picture, DH alone determined judgments of relative depth; HPP was irrelevant. Experiment 2 studied relative depth perception in single-plane displays (floor only or ceiling only) in which the horizon either was not visible or was always at the midpoint of the target object. When the target object was viewed against either a floor or a ceiling plane, some observers used DH, but others (erroneously) used HPP. In general, when DH is defined and unambiguous, observers use it to determine the relative distance to objects, but when DH is undefined and/or ambiguous, at least some observers use HPP.


Archive | 2017

Networks of Social and Moral Norms in Human and Robot Agents

Bertram F. Malle; Matthias Scheutz; Joseph L. Austerweil

The most intriguing and ethically challenging roles of robots in society are those of collaborator and social partner. We propose that such robots must have the capacity to learn, represent, activate, and apply social and moral norms—they must have a norm capacity. We offer a theoretical analysis of two parallel questions: what constitutes this norm capacity in humans and how might we implement it in robots? We propose that the human norm system has four properties: flexible learning despite a general logical format, structured representations, context-sensitive activation, and continuous updating. We explore two possible models that describe how norms are cognitively represented and activated in context-specific ways and draw implications for robotic architectures that would implement either model.
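As a purely illustrative reading of "context-sensitive activation" (not either of the two models the chapter develops; the norms and contexts below are invented), norms can be stored with the contexts in which they apply and activated whenever the current situation matches:

```python
# Each norm is stored with the context features under which it applies.
norms = [
    {"norm": "speak quietly",       "contexts": {"library", "hospital"}},
    {"norm": "do not interrupt",    "contexts": {"meeting", "classroom"}},
    {"norm": "offer help if asked", "contexts": {"library", "meeting", "home"}},
]

def activate(situation_features, norms):
    """Return the norms whose context features overlap the current situation."""
    return [n["norm"] for n in norms if n["contexts"] & situation_features]

print(activate({"library", "evening"}, norms))
# ['speak quietly', 'offer help if asked']
```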


Autism | 2015

Contradictory “heuristic” theories of autism spectrum disorders: The case for theoretical precision using computational models

Joseph L. Austerweil

Davis and Plaisted-Grant (2014) argue persuasively that reduced neural noise is consistent with most behavioral and physiological differences between autism spectrum disorder (ASD) and non-ASD individuals. This is surprising given that the same empirical results were previously used to support the opposite point of view: that increased neural noise is the primary cause of ASD (Rubenstein and Merzenich, 2003). When two incompatible theories explain the same results, the field needs to revisit its framework for defining and evaluating theories.

Formalizing theories using computational models can clarify a theory's predictions and distinguish it from theories of related disorders (e.g. Sikström and Söderlund, 2007, who instantiated a formal model of hypersensitivity in individuals with attention deficit hyperactivity disorder as reduced neural noise). Davis and Plaisted-Grant provide a valuable distinction between internal and external noise in theories of ASD. However, it is unclear how much more progress can be made from heuristic theories. They make clear, interesting predictions for a homunculus acting based on a single noisy neuron (e.g. cases where motion coherence should be intact for ASD individuals), yet it is unclear how these heuristic theories scale beyond a single neuron. For example, how does having less (or more) neural noise at every level of a hierarchy of neurons influence behavior? If only a subset of neurons has less (or more) neural noise, how should that influence behavior? As a step toward resolving the current disarray of results and theories, I briefly motivate and sketch a framework for developing computational models of ASD integrated over levels of analysis.

Formalizing theories is not merely a mathematical exercise; the resulting models explain counterintuitive behaviors that are difficult to understand from introspection. For example, computational formalizations of Parkinson's disease explain a previously inexplicable result: dopamine simultaneously impairs and aids learning in Parkinson's patients (Frank et al., 2004). Although multiple computational formalisms should be explored simultaneously, the Bayesian framework is particularly well-suited for formalizing ASD theories (Pellicano and Burr, 2012). It is based on Bayes' rule, which describes how agents should update their prior belief about something after observing evidence.

Previously, Pellicano and Burr (2012) argued that Bayesian models support a top-down ASD theory (weaker expectations) rather than a bottom-up theory (enhanced senses). However, Bayesian models are a framework or language for formalizing and evaluating theories, so the Bayesian framework is equally compatible with top-down and bottom-up theories. Top-down theories predict that ASD and non-ASD individuals have different priors, whereas bottom-up theories predict different likelihoods. If differences between ASD and non-ASD individuals can only be captured by positing both top-down (different priors) and bottom-up (different likelihoods) components, then the resulting model would in fact be a combination of both theories. Discerning the relative influence of top-down and bottom-up contributions is advantageous because many researchers believe that only a combination of the two can explain the many differences between ASD and non-ASD individuals. Once researchers formalize theories as computational models, those models can be used to design experiments that test the relevant components of the theories.

For most Bayesian models, the influence of the prior and the likelihood can be dissociated using specially designed behavioral experiments. Intuitively, this is because within a given trial the likelihood term applies multiple times (once for each observation in the trial), whereas the prior applies only once (regardless of the number of observations). In fact, this is precisely why Bayesian models have been so influential over the last decade: they provide a principled framework for identifying the relative contributions of prior expectations and incoming evidence.
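A minimal worked example of that dissociation (my own illustrative numbers, not from the article): because the likelihood is applied once per observation while the prior is applied only once, prior differences wash out as observations accumulate, so designs with few observations per trial are most diagnostic of top-down (prior) differences.

```python
def posterior_h1(prior_h1, lik_h1, lik_h2, n_obs):
    """Posterior probability of hypothesis h1 after n_obs identical observations."""
    p1 = prior_h1 * lik_h1 ** n_obs            # prior applied once,
    p2 = (1 - prior_h1) * lik_h2 ** n_obs      # likelihood applied n_obs times
    return p1 / (p1 + p2)

for n in (0, 1, 5, 20):
    strong_prior = posterior_h1(0.9, 0.6, 0.4, n)
    weak_prior = posterior_h1(0.5, 0.6, 0.4, n)
    print(n, round(strong_prior, 3), round(weak_prior, 3))
# With 0 or 1 observations the two prior settings give very different posteriors;
# by 20 observations they are nearly identical, so the data dominate the prior.
```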

Collaboration


Dive into Joseph L. Austerweil's collaborations.

Top Co-Authors

Jeffrey C. Zemla

University of Wisconsin-Madison

Emily Cibelli

University of California
