
Publication


Featured research published by Justin N. Wood.


Journal of Experimental Psychology: General | 2007

Visual Working Memory for Observed Actions.

Justin N. Wood

Human society depends on the ability to remember the actions of other individuals, which is information that must be stored in a temporary buffer to guide behavior after actions have been observed. To date, however, the storage capacity, contents, and architecture of working memory for observed actions are unknown. In this article, the author shows that it is possible to retain information about only 2-3 actions in visual working memory at once. However, it is also possible to retain 9 properties distributed across 3 actions almost as well as 3 properties distributed across 3 actions, showing that working memory stores integrated action representations rather than individual properties. Finally, the author shows that working memory for observed actions is independent from working memory for object and spatial information. These results provide evidence for a previously undocumented system in working memory for storing information about actions. Further, this system operates by the same storage principles as visual working memory for object information. Thus, working memory consists of a series of distinct yet computationally similar mechanisms for retaining different types of visual information.


Trends in Cognitive Sciences | 2008

Action comprehension in non-human primates: motor simulation or inferential reasoning?

Justin N. Wood; Marc D. Hauser

Some argue that action comprehension is intimately connected with the observer's own motor capacities, whereas others argue that action comprehension depends on non-motor inferential mechanisms. We address this debate by reviewing comparative studies that license four conclusions: monkeys and apes extract the meaning of an action (i) by going beyond the surface properties of actions, attributing goals and intentions to the agent; (ii) by using environmental information to infer when actions are rational; (iii) by making predictions about an agent's goal, and the most probable action to obtain the goal given environmental constraints; (iv) in situations in which they are physiologically incapable of producing the actions. Motor theories are, thus, insufficient to account for primate action comprehension in the absence of inferential mechanisms.


Journal of Cognitive Neuroscience | 2011

Spatial attention determines the nature of nonverbal number representation

Daniel C. Hyde; Justin N. Wood

Coordinated studies of adults, infants, and nonhuman animals provide evidence for two systems of nonverbal number representation: a “parallel individuation” system that represents individual items and a “numerical magnitude” system that represents the approximate cardinal value of a group. However, there is considerable debate about the nature and functions of these systems, due largely to the fact that some studies show a dissociation between small (1–3) and large (>3) number representation, whereas others do not. Using event-related potentials, we show that it is possible to determine which system will represent the numerical value of a small number set (1–3 items) by manipulating spatial attention. Specifically, when attention can select individual objects, an early brain response (N1) scales with the cardinal value of the display, the signature of parallel individuation. In contrast, when attention cannot select individual objects or is occupied by another task, a later brain response (P2p) scales with ratio, the signature of the approximate numerical magnitude system. These results provide neural evidence that small numbers can be represented as approximate numerical magnitudes. Further, they empirically demonstrate the importance of early attentional processes to number representation by showing that the way in which attention disperses across a scene determines which numerical system will deploy in a given context.


Animal Cognition | 2007

When quantity trumps number: discrimination experiments in cotton-top tamarins (Saguinus oedipus) and common marmosets (Callithrix jacchus)

Jeffrey R. Stevens; Justin N. Wood; Marc D. Hauser

The capacity for non-linguistic, numerical discrimination has been well characterized in non-human animals, with recent studies providing careful controls for non-numerical confounds such as continuous extent, density, and quantity. More poorly understood are the conditions under which animals use numerical versus non-numerical quantification, and the nature of the relation between these two systems. Here we test whether cotton-top tamarins and common marmosets can discriminate between two quantities on the basis of the amount of food rather than on number. In three experiments, we show that when choosing between arrays containing different numbers and sizes of food objects, both species based their decisions on the amount of food with only minor influences of numerical information. Further, we find that subjects successfully discriminated between two quantities differing by a 2:3 or greater ratio, which is consistent with the ratio limits found for numerical discrimination with this species. These studies demonstrate that non-human primates possess mechanisms that enable quantification of total amount, in addition to the numerical representations demonstrated in previous studies, with both types of quantification subject to similar processing limits.


Cognition | 2008

Evidence for a non-linguistic distinction between singular and plural sets in rhesus monkeys

David Barner; Justin N. Wood; Marc D. Hauser; Susan Carey

Set representations are explicitly expressed in natural language. For example, many languages distinguish between sets and subsets (all vs. some), as well as between singular and plural sets (a cat vs. some cats). Three experiments explored the hypothesis that these representations are language specific, and thus absent from the conceptual resources of non-linguistic animals. We found that rhesus monkeys spontaneously discriminate sets based on a conceptual singular-plural distinction. Under conditions that do not elicit comparisons based on approximate magnitudes or one-to-one correspondence, rhesus monkeys distinguished between singular and plural sets (1 vs. 2 and 1 vs. 5), but not between two plural sets (2 vs. 3, 2 vs. 4, and 2 vs. 5). These results suggest that set-relational distinctions are not a privileged part of natural language, and may have evolved in non-linguistic species to support domain general quantitative computations.


Proceedings of the Royal Society of London B: Biological Sciences | 2007

Rhesus monkeys correctly read the goal-relevant gestures of a human agent

Marc D. Hauser; David D. Glynn; Justin N. Wood

When humans point, they reveal to others their underlying intent to communicate about some distant goal. A controversy has recently emerged based on a broad set of comparative and phylogenetically relevant data. In particular, whereas chimpanzees (Pan troglodytes) have difficulty in using human-generated communicative gestures and actions such as pointing and placing symbolic markers to find hidden rewards, domesticated dogs (Canis familiaris) and silver foxes (Vulpes vulpes) readily use such gestures and markers. These comparative data have led to the hypothesis that the capacity to infer communicative intent in dogs and foxes has evolved as a result of human domestication. Though this hypothesis has met with challenges, due in part to studies of non-domesticated, non-primate animals, there remains the fundamental question of why our closest living relatives, the chimpanzees, together with other non-human primates, generally fail to make inferences about a target goal of an agent's communicative intent. Here, we add an important wrinkle to this phylogenetic pattern by showing that free-ranging rhesus monkeys (Macaca mulatta) draw correct inferences about the goals of a human agent, using a suite of communicative gestures to locate previously concealed food. Though domestication and human enculturation may play a significant role in tuning up the capacity to infer intentions from communicative gestures, these factors are not necessary.


Biology Letters | 2007

The uniquely human capacity to throw evolved from a non-throwing primate: an evolutionary dissociation between action and perception.

Justin N. Wood; David D. Glynn; Marc D. Hauser

Humans are uniquely endowed with the ability to engage in accurate, high-momentum throwing. Underlying this ability is a unique morphological adaptation that enables the characteristic rotation of the arm and pelvis. What is unknown is whether the psychological mechanisms that accompany the act of throwing are also uniquely human. Here we explore this problem by asking whether free-ranging rhesus monkeys (Macaca mulatta), which lack both the morphological and neural structures to throw, nonetheless recognize the functional properties of throwing. Rhesus not only understand that human throwing represents a threat, but that some aspects of a throwing event are more relevant than others; specifically, rhesus are sensitive to the kinematics, direction and speed of the rotating arm, the direction of the thrower's eye gaze and the object thrown. These results suggest that the capacity to throw did not coevolve with psychological mechanisms that accompany throwing; rather, this capacity may have built upon pre-existing perceptual processes. These results are consistent with a growing body of work showing that non-human animals often exhibit perceptual competencies that do not show up in their motor responses, suggesting evolutionary dissociations between the systems of perception that provide understanding of the world and those that mediate action on the world.


Proceedings of the National Academy of Sciences of the United States of America | 2013

Newborn chickens generate invariant object representations at the onset of visual object experience

Justin N. Wood

To recognize objects quickly and accurately, mature visual systems build invariant object representations that generalize across a range of novel viewing conditions (e.g., changes in viewpoint). To date, however, the origins of this core cognitive ability have not yet been established. To examine how invariant object recognition develops in a newborn visual system, I raised chickens from birth for 2 weeks within controlled-rearing chambers. These chambers provided complete control over all visual object experiences. In the first week of life, subjects’ visual object experience was limited to a single virtual object rotating through a 60° viewpoint range. In the second week of life, I examined whether subjects could recognize that virtual object from novel viewpoints. Newborn chickens were able to generate viewpoint-invariant representations that supported object recognition across large, novel, and complex changes in the object’s appearance. Thus, newborn visual systems can begin building invariant object representations at the onset of visual object experience. These abstract representations can be generated from sparse data, in this case from a visual world containing a single virtual object seen from a limited range of viewpoints. This study shows that powerful, robust, and invariant object recognition machinery is an inherent feature of the newborn brain.


Annual Review of Psychology | 2010

Evolving the Capacity to Understand Actions, Intentions, and Goals

Marc D. Hauser; Justin N. Wood

We synthesize the contrasting predictions of motor simulation and teleological theories of action comprehension and present evidence from a series of studies showing that monkeys and apes, like humans, extract the meaning of an event by (a) going beyond the surface appearance of actions, attributing goals and intentions to the agent; (b) using details about the environment to infer when an action is rational or irrational; (c) making predictions about an agent's goal and the most probable action to obtain the goal, within the constraints of the situation; (d) predicting the most probable outcome of actions even when they are physiologically incapable of producing the actions; and (e) combining information about means and outcomes to make decisions about social interactions, some with moral relevance. These studies reveal the limitations of motor simulation theories, especially those that rely on the notion of direct matching and mirror neuron activation. They provide support, however, for a teleological theory, rooted in an inferential process that extracts information about action means, potential goals, and the environmental constraints that limit rational action.


Attention Perception & Psychophysics | 2011

When do spatial and visual working memory interact?

Justin N. Wood

This study examined how spatial working memory and visual (object) working memory interact, focusing on two related questions: First, can these systems function independently from one another? Second, under what conditions do they operate together? In a dual-task paradigm, participants attempted to remember locations in a spatial working memory task and colored objects in a visual working memory task. Memory for the locations and objects was subject to independent working memory storage limits, which indicates that spatial and visual working memory can function independently from one another. However, additional experiments revealed that spatial working memory and visual working memory interact in three memory contexts: when retaining (1) shapes, (2) integrated color-shape objects, and (3) colored objects at specific locations. These results suggest that spatial working memory is needed to bind colors and shapes into integrated object representations in visual working memory. Further, this study reveals a set of conditions in which spatial and visual working memory can be isolated from one another.

Collaboration


Dive into Justin N. Wood's collaborations.

Top Co-Authors

Samantha M. W. Wood

University of Southern California

Sid Kouider

École Normale Supérieure

Jason G. Goldman

University of Southern California
