
Publication


Featured research published by Dale J. Barr.


Psychological Science | 2000

Taking Perspective in Conversation: The Role of Mutual Knowledge in Comprehension

Boaz Keysar; Dale J. Barr; Jennifer A. Balin; Jason S. Brauner

When people interpret language, they can reduce the ambiguity of linguistic expressions by using information about perspective: the speaker's, their own, or a shared perspective. In order to investigate the mental processes that underlie such perspective taking, we tracked people's eye movements while they were following instructions to manipulate objects. The eye fixation data in two experiments demonstrate that people do not restrict the search for referents to mutually known objects. Eye movements indicated that addressees considered objects as potential referents even when the speaker could not see those objects, requiring addressees to use mutual knowledge to correct their interpretation. Thus, people occasionally use an egocentric heuristic when they comprehend. We argue that this egocentric heuristic is successful in reducing ambiguity, though it could lead to systematic error.


Frontiers in Psychology | 2013

Random effects structure for testing interactions in linear mixed-effects models

Dale J. Barr

In a recent paper on mixed-effects models for confirmatory analysis, Barr et al. (2013) offered the following guideline for testing interactions: “one should have by-unit [subject or item] random slopes for any interactions where all factors comprising the interaction are within-unit; if any one factor involved in the interaction is between-unit, then the random slope associated with that interaction cannot be estimated, and is not needed” (p. 275). Although this guideline is technically correct, it is inadequate for many situations, including mixed factorial designs. The following new guideline is therefore proposed: models testing interactions in designs with replications should include random slopes for the highest-order combination of within-unit factors subsumed by each interaction. Designs with replications are designs where there are multiple observations per sampling unit per cell. Psychological experiments typically involve replicated observations, because multiple stimulus items are usually presented to the same subjects within a single condition. If observations are not replicated (i.e., there is only a single observation per unit per cell), random slope variance cannot be distinguished from random error variance and thus random slopes need not be included. This new guideline implies that a model testing AB in a 2 × 2 design where A is between and B within should include a random slope for B. Likewise, a model testing all two- and three-way interactions in a 2 × 2 × 2 design where A is between and B, C are within should include random slopes for B, C, and BC. The justification for the guideline comes from the logic of mixed-model ANOVA. In an ANOVA analysis of the 2 × 2 design described above, the appropriate error term for the test of AB is MS(U×B), the mean squares for the unit-by-B interaction (e.g., the subjects-by-B or items-by-B interaction).
For the 2 × 2 × 2 design, the appropriate error term for ABC and BC is MS(U×BC), the unit-by-BC interaction; for AB, it is MS(U×B); and for AC, it is MS(U×C). To what extent is this ANOVA logic applicable to tests of interactions in mixed-effects models? To address this question, Monte Carlo simulations were performed using R (R Core Team, 2013). Models were estimated using the lmer() function of lme4 (Bates et al., 2013), with p-values derived from model comparison (α = 0.05). The performance of mixed-effects models (in terms of Type I error and power) was assessed over two sets of simulations, one for each of two different mixed factorial designs. The first set focused on the test of the AB interaction in a 2 × 2 design with A between and B within; the second focused on the test of the ABC interaction in a 2 × 2 × 2 design with A between and B, C within. For simplicity, all datasets included only a single source of random effect variance (e.g., by-subject but not by-item variance). The number of replications per cell was 4, 8, or 16. Predictors were coded using deviation (−0.5, 0.5) coding; identical results were obtained using treatment coding. In the rare case (~2%) that a model did not converge, it was removed from the analysis. Power was reported with and without adjustment for Type I error rate, using the adjustment method reported in Barr et al. (2013). For each set of simulations at each of the three replication levels, 10,000 datasets were randomly generated, each with 24 sampled units (e.g., subjects). The dependent variable was continuous and normally distributed, with all data-generating parameters drawn from uniform distributions. Fixed effects were either between −2 and −1 or between 1 and 2 (with equal probability). The error variance was fixed at 6, and the random effects variance/covariance matrix had variances ranging from 0 to 3 and covariances corresponding to correlations ranging from −0.9 to 0.9.
For the 2 × 2 design, mixed-effects models with two different random effects structures were fit to the data: (1) a by-unit random intercept but no random slope for B (“RI”), and (2) a maximal model including a slope for B in addition to the random intercept (“Max”). For comparison purposes, a test of the interaction using mixed-model ANOVA (“AOV”) was performed using R's aov() function. Results for the test of the AB interaction in the 2 × 2 design are in Tables 1 and 2. As expected, the Type I error rates for ANOVA and maximal models were very close to the stated α-level of 0.05. In contrast, models lacking the random slope for B (“RI”) showed unacceptably high Type I error rates, increasing with the number of replications. Adjusted power was comparable for all three types of analyses (Table 2), albeit with a slight overall advantage for RI.

Table 1. Type I error rate for the test of AB in the 2 × 2 design.
Table 2. Power for the test of AB in the 2 × 2 design, Adjusted (Raw) p-values.

The test of the ABC interaction in the 2 × 2 × 2 design was evaluated under four different random effects structures, all including a random intercept but varying in which random slopes were included. The models were: (1) random intercept only (“RI”); (2) slopes for B and C but not for BC (“nBC”); (3) a slope for BC but not for B or C (“BC”); and (4) maximal (slopes for B, C, and BC; “Max”). For the test of the ABC interaction, ANOVA and maximal models both yielded acceptable Type I performance (Table 3); the model with the BC slope alone (“BC”) was comparably good. However, the model excluding the BC slope had unacceptably high Type I error rates; surprisingly, omitting this random slope may be even worse than fitting a random-intercept-only model. Adjusted power was comparable across all analyses (Table 4).

Table 3. Type I error rate for the test of ABC in the 2 × 2 × 2 design.
Table 4. Power for the test of ABC in the 2 × 2 × 2 design, Adjusted (Raw) p-values.
To summarize: when testing interactions in mixed designs with replications, it is critical to include the random slope corresponding to the highest-order combination of within-subject factors subsumed by each interaction of interest. It is just as important to attend to this guideline when one seeks to simplify a non-converging model as when one is deciding on what structure to fit in the first place. Failing to include the critical slope in the test of an interaction can yield unacceptably high Type I error rates. Indeed, a model that includes all relevant random slopes except for the single critical slope may perform just as badly as (or possibly even worse than) a random-intercepts-only model, even though such a model is nearly maximal. Finally, note that including only the critical random slope in the model was sufficient to obtain acceptable performance, as illustrated by the “BC” model in the 2 × 2 × 2 design. Although the current simulations only considered interactions between categorical variables, the guideline applies whenever there are replicated observations, regardless of what types of variables are involved in an interaction (e.g., continuous only, or a mix of categorical and continuous). For example, consider a design with two independent groups of subjects, where there are observations at multiple time points for each subject. When testing the time-by-group interaction, the model should include a random slope for the continuous variable of time; if time is modeled using multiple terms of a polynomial, then there should be a slope for each of the terms in the polynomial that interact with group. For instance, if the effect of time is modeled as Y = β0 + β1t + β2t² and the interest is in whether the β1 and β2 parameters vary across groups, then the random effects structure should include random slopes for both t and t².
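The guideline above can be illustrated with lme4 model formulas. This is a sketch, not the paper's own code: the names dat, Y, subject, A, B, C, group, and t are placeholders, and dat is assumed to be a data frame with replicated observations per subject per cell.

```r
library(lme4)

# 2 x 2 design, A between-subject, B within-subject: the within-unit
# part of the AB interaction is B, so include a by-subject random
# slope for B (the "Max" model in the simulations above).
mod_max <- lmer(Y ~ A * B + (1 + B | subject), data = dat)

# Omitting the B slope (the "RI" model) inflates the Type I error
# rate for the test of AB.
mod_ri <- lmer(Y ~ A * B + (1 | subject), data = dat)

# 2 x 2 x 2 design, A between, B and C within: the highest-order
# within-unit combination subsumed by ABC is BC, so the BC slope is
# the critical one; B * C expands to B + C + B:C ("Max").
mod_3way <- lmer(Y ~ A * B * C + (1 + B * C | subject), data = dat)

# Growth-curve example: two groups, time t measured within subject.
# Random slopes for the within-unit terms t and t^2 support tests of
# the group-by-time interactions.
mod_time <- lmer(Y ~ group * (t + I(t^2)) + (1 + t + I(t^2) | subject),
                 data = dat)
```

The p-values described in the simulations come from model comparison, e.g., refitting with and without the fixed-effect interaction term and comparing the nested fits with anova().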


Handbook of Psycholinguistics (Second Edition) | 2006

Perspective Taking and the Coordination of Meaning in Language Use

Dale J. Barr; Boaz Keysar

Language use requires coordination to be successful, because language can be ambiguous. Thus, speakers and listeners will often have to adapt their behavior to their interlocutor to avoid misunderstanding. This chapter discusses the audience design hypothesis, or the design hypothesis for short. This hypothesis assumes that speakers and listeners achieve success in communication because they maintain detailed models of what the other person knows, and speak and understand against these models. Speakers adapt some aspects of their speech to characteristics of their listeners. Adults often speak to small children using motherese, a form of speech containing exaggerated prosody, a higher pitch, and simplified syntax. Speakers change their speaking register or style depending upon the social identity of their addressees. Casual observation reveals that bilinguals often mix languages when talking to other bilinguals, but tend to use a single language when speaking to monolinguals. And developmental psychologists such as Piaget have observed that the speech of children becomes less egocentric and more listener-centered as they mature.


Language and Cognitive Processes | 2010

The role of fillers in listener attributions for speaker disfluency

Dale J. Barr; Mandana Seyfeddinipur

When listeners hear a speaker become disfluent, they expect the speaker to refer to something new. What is the mechanism underlying this expectation? In a mouse-tracking experiment, listeners sought to identify images that a speaker was describing. Listeners more strongly expected new referents when they heard a speaker say um than when they heard a matched utterance where the um was replaced by noise. This expectation was speaker-specific: it depended on what was new and old for the current speaker, not just on what was new or old for the listener. This finding suggests that listeners treat fillers as collateral signals.


Psychonomic Bulletin & Review | 2003

Paralinguistic correlates of conceptual structure.

Dale J. Barr

How is conceptual knowledge transmitted during conversation? When a speaker refers to an object, the name that the speaker chooses conveys information about category identity. In addition, I propose that a speaker's confidence in a classification can convey information about category structure. Because atypical instances of a category are more difficult to classify than typical instances, when speakers refer to these instances their lack of confidence will manifest itself “paralinguistically,” that is, in the form of hesitations, filled pauses, or rising prosody. These features can help listeners learn by enabling them to differentiate good from bad examples of a category. To evaluate this hypothesis, participants in a category learning experiment learned a set of novel colors from a speaker. When the speaker's paralinguistically expressed confidence was consistent with the underlying category structure, learners acquired the categories more rapidly and showed better category differentiation from the earliest moments of learning. These findings have important implications for theories of conversational coordination and language learning.


Frontiers in Human Neuroscience | 2013

How culture influences perspective taking: differences in correction, not integration

Shali Wu; Dale J. Barr; Timothy M. Gann; Boaz Keysar

Individuals from East Asian (Chinese) backgrounds have been shown to exhibit greater sensitivity to a speaker’s perspective than Western (U.S.) participants when resolving referentially ambiguous expressions. We show that this cultural difference does not reflect better integration of social information during language processing, but rather is the result of differential correction: in the earliest moments of referential processing, Chinese participants showed equivalent egocentric interference to Westerners, but managed to suppress the interference earlier and more effectively. A time-series analysis of visual-world eye-tracking data found that the two cultural groups diverged extremely late in processing, between 600 and 1400 ms after the onset of egocentric interference. We suggest that the early moments of referential processing reflect the operation of a universal stratum of processing that provides rapid ambiguity resolution at the cost of accuracy and flexibility. Late components, in contrast, reflect the mapping of outputs from referential processes to decision-making and action planning systems, allowing for a flexibility in responding that is molded by culturally specific demands.


Proceedings of the Royal Society of London B: Biological Sciences | 2014

Cultural selection drives the evolution of human communication systems

Monica Tamariz; T. Mark Ellison; Dale J. Barr; Nicolas Fay

Human communication systems evolve culturally, but the evolutionary mechanisms that drive this evolution are not well understood. Against a baseline that communication variants spread in a population following neutral evolutionary dynamics (also known as drift models), we tested the role of two cultural selection models: coordination- and content-biased. We constructed a parametrized mixed probabilistic model of the spread of communicative variants in four 8-person laboratory micro-societies engaged in a simple communication game. We found that selectionist models, working in combination, explain the majority of the empirical data. The best-fitting parameter setting includes an egocentric bias and a content bias, suggesting that participants retained their own previously used communicative variants unless they encountered a superior (content-biased) variant, in which case it was adopted. This novel pattern of results suggests that (i) a theory of the cultural evolution of human communication systems must integrate selectionist models and (ii) human communication systems are functionally adaptive complex systems.


Journal of Experimental Psychology: General | 2014

Using a voice to put a name to a face: the psycholinguistics of proper name comprehension.

Dale J. Barr; Laura Jackson; Isobel Phillips

We propose that hearing a proper name (e.g., Kevin) in a particular voice serves as a compound memory cue that directly activates representations of a mutually known target person, often permitting reference resolution without any complex computation of shared knowledge. In a referential communication study, pairs of friends played a communication game, in which we monitored the eyes of one friend (the addressee) while he or she sought to identify the target person, in a set of four photos, on the basis of a name spoken aloud. When the name was spoken by a friend, addressees rapidly identified the target person, and this facilitation was independent of whether the friend was articulating a message he or she had designed versus one from a third party with whom the target person was not shared. Our findings suggest that the comprehension system takes advantage of regularities in the environment to minimize effortful computation about who knows what.


bioRxiv | 2017

Limits on prediction in language comprehension: A multi-lab failure to replicate evidence for probabilistic pre-activation of phonology

Mante S. Nieuwland; Stephen Politzer-Ahles; Evelien Heyselaar; Katrien Segaert; Emily Darley; Nina Kazanina; Sarah Von Grebmer Zu Wolfsthurn; Federica Bartolozzi; Vita Kogan; Aine Ito; Diane Mézière; Dale J. Barr; Guillaume A. Rousselet; Heather J. Ferguson; Simon Busch-Moreno; Xiao Fu; Jyrki Tuomainen; Eugenia Kulakova; E. Matthew Husband; David L. Donaldson; Zdenko Kohút; Shirley-Ann Rueschemeyer; Falk Huettig

In current theories of language comprehension, people routinely and implicitly predict upcoming words by pre-activating their meaning, morpho-syntactic features and even their specific phonological form. To date the strongest evidence for this latter form of linguistic prediction comes from a 2005 Nature Neuroscience landmark publication by DeLong, Urbach and Kutas, who observed a graded modulation of article- and noun-elicited electrical brain potentials (N400) by the pre-determined probability that people continue a sentence fragment with that word (‘cloze’). In a direct replication study spanning 9 laboratories (N=334), we failed to replicate the crucial article-elicited N400 modulation by cloze, while we successfully replicated the commonly-reported noun-elicited N400 modulation. This pattern of failure and success was observed in a pre-registered replication analysis, a pre-registered single-trial analysis, and in exploratory Bayesian analyses. Our findings do not support a strong prediction view in which people routinely pre-activate the phonological form of upcoming words, and suggest a more limited role for prediction during language comprehension.


Brain and Language | 2017

Eye-tracking the time-course of novel word learning and lexical competition in adults and children.

Anna Weighall; Lisa-Marie Henderson; Dale J. Barr; Scott A. Cairney; Mark Gareth Gaskell

Highlights
Newly learned spoken words can compete for recognition soon after learning.
Lexical competition effects were smaller for newly learned than existing words.
Explicit memory was superior for words learned the day before testing.
The sleep advantage for explicit memory correlated with sleep-spindle density.
Word learning seems boosted by sleep to a greater degree in children than adults.

Abstract: Lexical competition is a hallmark of proficient, automatic word recognition. Previous research suggests that there is a delay before a new spoken word becomes engaged in this process, with sleep playing an important role. However, data from one method – the visual world paradigm – consistently show competition without a delay. We trained 42 adults and 40 children (aged 7–8) on novel word-object pairings, and employed this paradigm to measure the time-course of lexical competition. Fixations to novel objects upon hearing existing words (e.g., looks to the novel object biscal upon hearing “click on the biscuit”) were compared to fixations on untrained objects. Novel word-object pairings learned immediately before testing and those learned the previous day exhibited significant competition effects, with stronger competition for the previous-day pairings for children but not adults. Crucially, this competition effect was significantly smaller for novel than existing competitors (e.g., looks to candy upon hearing “click on the candle”), suggesting that novel items may not compete for recognition like fully-fledged lexical items, even after 24 h. Explicit memory (cued recall) was superior for words learned the day before testing, particularly for children; this effect (but not the lexical competition effects) correlated with sleep-spindle density.
Together, the results suggest that different aspects of new word learning follow different time courses: visual world competition effects can emerge swiftly, but are qualitatively different from those observed with established words, and are less reliant upon sleep. Furthermore, the findings fit with the view that word learning earlier in development is boosted by sleep to a greater degree.

Collaboration

Dive into Dale J. Barr's collaboration.

Top Co-Authors

Edmundo Kronmüller (Pontifical Catholic University of Chile)
Aine Ito (University of Edinburgh)