
Publications


Featured research published by Daniel C. Richardson.


Psychological Science | 2007

The Art of Conversation Is Coordination: Common Ground and the Coupling of Eye Movements During Dialogue

Daniel C. Richardson; Rick Dale; Natasha Z. Kirkham

When two people discuss something they can see in front of them, what is the relationship between their eye movements? We recorded the gaze of pairs of subjects engaged in live, spontaneous dialogue. Cross-recurrence analysis revealed a coupling between the eye movements of the two conversants. In the first study, we found their eye movements were coupled across several seconds. In the second, we found that this coupling increased if they both heard the same background information prior to their conversation. These results provide a direct quantification of joint attention during unscripted conversation and show that it is influenced by knowledge in the common ground.
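Cross-recurrence analysis, as used in this study, measures how often two gaze streams fixate the same region at each time lag, producing a coupling profile across lags. A minimal sketch of that idea for categorical gaze data follows; this is an illustration under assumed conventions (toy data, invented names), not the authors' actual analysis pipeline.

```python
# Sketch of categorical cross-recurrence: for each lag, the fraction of
# time points at which two gaze series fixate the same region.
# Series are region labels sampled at a fixed rate; names are illustrative.

def cross_recurrence(gaze_a, gaze_b, max_lag):
    """Return {lag: recurrence rate} for lags in [-max_lag, max_lag]."""
    n = min(len(gaze_a), len(gaze_b))
    rates = {}
    for lag in range(-max_lag, max_lag + 1):
        matches = total = 0
        for t in range(n):
            u = t + lag  # compare a at time t with b at time t + lag
            if 0 <= u < n:
                total += 1
                if gaze_a[t] == gaze_b[u]:
                    matches += 1
        rates[lag] = matches / total if total else 0.0
    return rates

# Toy gaze streams over regions W/X/Y/Z: stream b follows stream a
# with a delay of 2 samples, so recurrence should peak at lag +2.
a = list("XXYYZZXXYY")
b = list("WWXXYYZZXX")
rates = cross_recurrence(a, b, max_lag=3)
best_lag = max(rates, key=rates.get)  # lag of maximal coupling: 2
```

A peak at a positive lag of this kind is how "coupling across several seconds" shows up in such an analysis: one conversant's fixations predict the other's a short time later.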


Topics in Cognitive Science | 2009

Conversation and Coordinative Structures

Kevin Shockley; Daniel C. Richardson; Rick Dale

People coordinate body postures and gaze patterns during conversation. We review literature showing that (1) action embodies cognition, (2) postural coordination emerges spontaneously when two people converse, (3) gaze patterns influence postural coordination, (4) gaze coordination is a function of common ground knowledge and visual information that conversants believe they share, and (5) gaze coordination is causally related to mutual understanding. We then consider how coordination, generally, can be understood as temporarily coupled neuromuscular components that function as a collective unit known as a coordinative structure in the motor control literature. We speculate that the coordination of gaze and body sway found in conversation may be understood as a cross-person coordinative structure that embodies the goals of the joint action system.


Journal of Experimental Psychology: General | 2004

Multimodal events and moving locations: eye movements of adults and 6-month-olds reveal dynamic spatial indexing.

Daniel C. Richardson; Natasha Z. Kirkham

The ability to keep track of locations in a dynamic, multimodal environment is crucial for successful interactions with other people and objects. The authors investigated the existence and flexibility of spatial indexing in adults and 6-month-old infants by adapting an eye-tracking paradigm from D. C. Richardson and M. J. Spivey (2000). Multimodal events were presented in specific locations, and eye movements were measured when the auditory portion of the stimulus was presented without its visual counterpart. Experiment 1 showed that adults spatially index auditory information even when the original associated locations move. Experiments 2 and 3 showed that infants are capable of both binding multimodal events to locations and tracking those locations when they move.


Developmental Psychology | 2011

Infants learn about objects from statistics and people

Rachel Wu; Alison Gopnik; Daniel C. Richardson; Natasha Z. Kirkham

In laboratory experiments, infants are sensitive to patterns of visual features that co-occur (e.g., Fiser & Aslin, 2002). Once infants learn the statistical regularities, however, what do they do with that knowledge? Moreover, which patterns do infants learn in the cluttered world outside of the laboratory? Across 4 experiments, we show that 9-month-olds use this sensitivity to make inferences about object properties. In Experiment 1, 9-month-old infants expected co-occurring visual features to remain fused (i.e., infants looked longer when co-occurring features split apart than when they stayed together). Forming such expectations can help identify integral object parts for object individuation, recognition, and categorization. In Experiment 2, we increased the task difficulty by presenting the test stimuli simultaneously with a different spatial layout from the familiarization trials to provide a more ecologically valid condition. Infants did not make similar inferences in this more distracting test condition. However, Experiment 3 showed that a social cue did allow inferences in this more difficult test condition, and Experiment 4 showed that social cues helped infants choose patterns among distractor patterns during learning as well as during test. These findings suggest that infants can use feature co-occurrence to learn about objects and that social cues shape such foundational learning in distraction-filled environments.


Psychology of Learning and Motivation, Vol. 59, pp. 43–95 | 2013

The Self-Organization of Human Interaction

Rick Dale; Riccardo Fusaroli; Nicholas D. Duran; Daniel C. Richardson

We describe a “centipede’s dilemma” that faces the sciences of human interaction. Research on human interaction has been involved in extensive theoretical debate, although the vast majority of research tends to focus on a small set of human behaviors, cognitive processes, and interactive contexts. The problem is that naturalistic human interaction must integrate all of these factors simultaneously, and grander theoretical mitigation cannot come only from focused experimental or computational agendas. We look to dynamical systems theory as a framework for thinking about how these multiple behaviors, processes, and contexts can be integrated into a broader account of human interaction. By introducing and utilizing basic concepts of self-organization and synergy, we review empirical work that shows how human interaction is flexible and adaptive and structures itself incrementally during unfolding interactive tasks, such as conversation, or more focused goal-based contexts. We end on acknowledging that dynamical systems accounts are very short on concrete models, and we briefly describe ways that theoretical frameworks could be integrated, rather than endlessly disputed, to achieve some success on the centipede’s dilemma of human interaction.


Neuropsychologia | 2010

Intact imitation of emotional facial actions in autism spectrum conditions.

Clare Press; Daniel C. Richardson; Geoffrey Bird

It has been proposed that there is a core impairment in autism spectrum conditions (ASC) to the mirror neuron system (MNS): If observed actions cannot be mapped onto the motor commands required for performance, higher order sociocognitive functions that involve understanding another person's perspective, such as theory of mind, may be impaired. However, evidence of MNS impairment in ASC is mixed. The present study used an 'automatic imitation' paradigm to assess MNS functioning in adults with ASC and matched controls, when observing emotional facial actions. Participants performed a pre-specified angry or surprised facial action in response to observed angry or surprised facial actions, and the speed of their action was measured with motion tracking equipment. Both the ASC and control groups demonstrated automatic imitation of the facial actions, such that responding was faster when they acted with the same emotional expression that they had observed. There was no difference between the two groups in the magnitude of the effect. These findings suggest that previous apparent demonstrations of impairments to the MNS in ASC may be driven by a lack of visual attention to the stimuli or motor sequencing impairments, and therefore that there is, in fact, no MNS impairment in ASC. We discuss these findings with reference to the literature on MNS functioning and imitation in ASC, as well as theories of the role of the MNS in sociocognitive functioning in typical development.


Journal of Autism and Developmental Disorders | 2011

The Role of Alexithymia in Reduced Eye-Fixation in Autism Spectrum Conditions

Geoffrey Bird; Clare Press; Daniel C. Richardson

Eye-tracking studies have demonstrated mixed support for reduced eye fixation when looking at social scenes in individuals with Autism Spectrum Conditions (ASC). We present evidence that these mixed findings are due to a separate condition—alexithymia—that is frequently comorbid with ASC. We find that in adults with ASC, autism symptom severity correlated negatively with attention to faces when watching video clips. However, only the degree of alexithymia, and not autism symptom severity, predicted eye fixation. As well as potentially resolving the contradictory evidence in this area, these findings suggest that individuals with ASC and alexithymia may form a sub-group of individuals with ASC, with emotional impairments in addition to the social impairments characteristic of ASC.


Trends in Cognitive Sciences | 2009

Much ado about eye movements to nothing: a response to Ferreira et al.: Taking a new look at looking at nothing

Daniel C. Richardson; Gerry T. M. Altmann; Michael J. Spivey; Merrit A. Hoover

Ferreira et al. [1] outline an 'integrated representation theory' of the 'looking at nothing' phenomenon that we have previously documented [2–9]. We largely agree with the explanation by Ferreira et al. because we have argued for the same mechanisms ourselves in prior publications. Their claim to novelty rests upon a misrepresentation of our views (see Box 1). Here, we discuss the one novel claim that Ferreira et al. do make, concerning the consequence of looking at nothing, rather than the mechanism by which the looking is initiated.


Psychological Science | 2008

Where Do We Look During Potentially Offensive Behavior?

Jennifer Randall Crosby; Benoît Monin; Daniel C. Richardson

Imagine (or remember) being the only member of a particular social group in the room. Someone makes a questionable remark about your group, and all eyes turn to you. If you have ever experienced this, you know that it is doubly unpleasant. Not only has your social group been besmirched, but also you have suddenly become the center of unwelcome attention. In the experiment reported here, we used eye movement recordings to investigate this phenomenon from the perspective of the people looking at the offended bystander. Our findings point toward the function of this behavior, and reveal the surprising depth of cognitive processing that is engaged by social interaction. One explanation for this attention is that people are practicing social referencing—seeking out the responses of a potentially victimized group member to help them assess the situation (Crosby, 2006). Because of their personal experience with prejudice (Essed, 1992), minority-group members may be seen as experts on prejudice (Swim, Cohen, & Hyers, 1998) and may also be seen as experts in the area of morality (Vorauer, 2006). In fact, minority-group members may have more influence than majority-group members over judgments of discrimination (Crosby & Monin, 2008). Given these findings, the responses of minority-group members may be informative as people assess controversial comments. A simpler, alternative hypothesis is that members of relevant groups are looked at simply because of low-level associations; hearing "the economy is in the red," one might look at someone wearing red. Eye movement studies often reveal such effects, in which words or parts of words trigger looks to potential referents (Tanenhaus, Spivey-Knowlton, Eberhard, & Sedivy, 1995), even when the referents have been removed and the locations are empty (Richardson & Spivey, 2000).
This alternative association hypothesis predicts that the mere mention of a social group will lead people to look at a member of that group, regardless of whether the group member might provide useful information. We tested the association hypothesis by emulating potentially offensive behavior in the lab. Four males (three White and one Black) discussed university admissions. One of the White discussants criticized affirmative action, and we manipulated whether or not participants believed the Black discussant heard what was said. Whereas the social-referencing hypothesis suggests that he would be fixated only if he could have an informative reaction, the association hypothesis predicts that he would be fixated regardless.


Frontiers in Psychology | 2011

The Dynamics of Reference and Shared Visual Attention

Rick Dale; Natasha Z. Kirkham; Daniel C. Richardson

In the tangram task, two participants are presented with the same set of abstract shapes portrayed in different orders. One participant must instruct the other to arrange their shapes so that the orders match. To do this, they must find a way to refer to the abstract shapes. In the current experiment, the eye movements of pairs of participants were tracked while they were engaged in a computerized version of the task. Results revealed the canonical tangram effect: participants became faster at completing the task from round 1 to round 3. Also, their eye-movements synchronized over time. Cross-recurrence analysis was used to quantify this coordination, and showed that as participants’ words coalesced, their actions approximated a single coordinated system.

Collaboration


Dive into Daniel C. Richardson's collaborations.

Top Co-Authors

Rick Dale

University of California

Chris N. H. Street

University of British Columbia
