Publication


Featured research published by Timothy F. Brady.


Proceedings of the National Academy of Sciences of the United States of America | 2008

Visual long-term memory has a massive storage capacity for object details

Timothy F. Brady; Talia Konkle; George A. Alvarez; Aude Oliva

One of the major lessons of memory research has been that human memory is fallible, imprecise, and subject to interference. Thus, although observers can remember thousands of images, it is widely assumed that these memories lack detail. Contrary to this assumption, here we show that long-term memory is capable of storing a massive number of objects with details from the image. Participants viewed pictures of 2,500 objects over the course of 5.5 h. Afterward, they were shown pairs of images and indicated which of the two they had seen. The previously viewed item could be paired with either an object from a novel category, an object of the same basic-level category, or the same object in a different state or pose. Performance in each of these conditions was remarkably high (92%, 88%, and 87%, respectively), suggesting that participants successfully maintained detailed representations of thousands of images. These results have implications for cognitive models, in which capacity limitations impose a primary computational constraint (e.g., models of object recognition), and pose a challenge to neural models of memory storage and retrieval, which must be able to account for such a large and detailed storage capacity.


Journal of Vision | 2011

A review of visual memory capacity: Beyond individual items and toward structured representations

Timothy F. Brady; Talia Konkle; George A. Alvarez

Traditional memory research has focused on identifying separate memory systems and exploring different stages of memory processing. This approach has been valuable for establishing a taxonomy of memory systems and characterizing their function but has been less informative about the nature of stored memory representations. Recent research on visual memory has shifted toward a representation-based emphasis, focusing on the contents of memory and attempting to determine the format and structure of remembered information. The main thesis of this review will be that one cannot fully understand memory systems or memory processes without also determining the nature of memory representations. Nowhere is this connection more obvious than in research that attempts to measure the capacity of visual memory. We will review research on the capacity of visual working memory and visual long-term memory, highlighting recent work that emphasizes the contents of memory. This focus impacts not only how we estimate the capacity of the system--going beyond quantifying how many items can be remembered and moving toward structured representations--but how we model memory systems and memory processes.


Current Biology | 2009

Spontaneous Motor Entrainment to Music in Multiple Vocal Mimicking Species

Adena Schachner; Timothy F. Brady; Irene M. Pepperberg; Marc D. Hauser

The human capacity for music consists of certain core phenomena, including the tendency to entrain, or align movement, to an external auditory pulse [1-3]. This ability, fundamental both for music production and for coordinated dance, has been repeatedly highlighted as uniquely human [4-11]. However, it has recently been hypothesized that entrainment evolved as a by-product of vocal mimicry, generating the strong prediction that only vocal mimicking animals may be able to entrain [12, 13]. Here we provide comparative data demonstrating the existence of two proficient vocal mimicking nonhuman animals (parrots) that entrain to music, spontaneously producing synchronized movements resembling human dance. We also provide an extensive comparative data set from a global video database systematically analyzed for evidence of entrainment in hundreds of species both capable and incapable of vocal mimicry. Despite the higher representation of vocal nonmimics in the database and comparable exposure of mimics and nonmimics to humans and music, only vocal mimics showed evidence of entrainment. We conclude that entrainment is not unique to humans and that the distribution of entrainment across species supports the hypothesis that entrainment evolved as a by-product of selection for vocal mimicry.


Psychological Science | 2011

Hierarchical Encoding in Visual Working Memory: Ensemble Statistics Bias Memory for Individual Items

Timothy F. Brady; George A. Alvarez

Influential models of visual working memory treat each item to be stored as an independent unit and assume that there are no interactions between items. However, real-world displays have structure that provides higher-order constraints on the items to be remembered. Even in the case of a display of simple colored circles, observers can compute statistics, such as mean circle size, to obtain an overall summary of the display. We examined the influence of such an ensemble statistic on visual working memory. We report evidence that the remembered size of each individual item in a display is biased toward the mean size of the set of items in the same color and the mean size of all items in the display. This suggests that visual working memory is constructive, encoding displays at multiple levels of abstraction and integrating across these levels, rather than maintaining a veridical representation of each item independently.
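The bias described in this abstract can be illustrated as a weighted combination of an item's own noisy memory and the ensemble mean. A minimal sketch, assuming a fixed illustrative weight (the sizes and the weight value below are made up for illustration, not taken from the paper):

```python
import numpy as np

# Illustrative sketch (not the paper's fitted model): the reported size of
# each item is a weighted mix of the item's true size and the ensemble
# mean, which pulls individual reports toward the mean.
sizes = np.array([2.0, 3.0, 7.0])   # true sizes of same-colored circles
ensemble_mean = sizes.mean()         # 4.0
w = 0.7                              # assumed weight on the individual item
reported = w * sizes + (1 - w) * ensemble_mean
print(reported)                      # every item is shifted toward 4.0
```

With any weight below 1, each reported size lies strictly between the true size and the ensemble mean, reproducing the qualitative bias-toward-the-mean pattern the paper reports.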


The Journal of Neuroscience | 2011

Disentangling scene content from spatial boundary: complementary roles for the parahippocampal place area and lateral occipital complex in representing real-world scenes.

Soojin Park; Timothy F. Brady; Michelle R. Greene; Aude Oliva

Behavioral and computational studies suggest that visual scene analysis rapidly produces a rich description of both the objects and the spatial layout of surfaces in a scene. However, there is still a large gap in our understanding of how the human brain accomplishes these diverse functions of scene understanding. Here we probe the nature of real-world scene representations using multivoxel functional magnetic resonance imaging pattern analysis. We show that natural scenes are analyzed in a distributed and complementary manner by the parahippocampal place area (PPA) and the lateral occipital complex (LOC) in particular, as well as other regions in the ventral stream. Specifically, we study the classification performance of different scene-selective regions using images that vary in spatial boundary and naturalness content. We discover that, whereas both the PPA and LOC can accurately classify scenes, they make different errors: the PPA more often confuses scenes that have the same spatial boundaries, whereas the LOC more often confuses scenes that have the same content. By demonstrating that visual scene analysis recruits distinct and complementary high-level representations, our results testify to distinct neural pathways for representing the spatial boundaries and content of a visual scene.


Journal of Experimental Psychology: Human Perception and Performance | 2007

Spatial Constraints on Learning in Visual Search: Modeling Contextual Cuing

Timothy F. Brady; Marvin M. Chun

Predictive visual context facilitates visual search, a benefit termed contextual cuing (M. M. Chun & Y. Jiang, 1998). In the original task, search arrays were repeated across blocks such that the spatial configuration (context) of all of the distractors in a display predicted an embedded target location. The authors modeled existing results using a connectionist architecture and then designed new behavioral experiments to test the model's assumptions. The modeling and behavioral results indicate that learning may be restricted to the local context even when the entire configuration is predictive of target location. Local learning constrains how much guidance is produced by contextual cuing. The modeling and new data also demonstrate that local learning requires that the local context maintain its location in the overall global context.


Psychological Science | 2010

Scene Memory Is More Detailed Than You Think: The Role of Categories in Visual Long-Term Memory

Talia Konkle; Timothy F. Brady; George A. Alvarez; Aude Oliva

Observers can store thousands of object images in visual long-term memory with high fidelity, but the fidelity of scene representations in long-term memory is not known. Here, we probed scene-representation fidelity by varying the number of studied exemplars in different scene categories and testing memory using exemplar-level foils. Observers viewed thousands of scenes over 5.5 hr and then completed a series of forced-choice tests. Memory performance was high, even with up to 64 scenes from the same category in memory. Moreover, there was only a 2% decrease in accuracy for each doubling of the number of studied scene exemplars. Surprisingly, this degree of categorical interference was similar to the degree previously demonstrated for object memory. Thus, although scenes have often been defined as a superset of objects, our results suggest that scenes and objects may be entities at a similar level of abstraction in visual long-term memory.


Communicative & Integrative Biology | 2009

Detecting changes in real-world objects: The relationship between visual long-term memory and change blindness

Timothy F. Brady; Talia Konkle; Aude Oliva; George A. Alvarez

A large body of literature has shown that observers often fail to notice significant changes in visual scenes, even when these changes happen right in front of their eyes. For instance, people often fail to notice if their conversation partner is switched to another person, or if large background objects suddenly disappear.1,2 These ‘change blindness’ studies have led to the inference that the amount of information we remember about each item in a visual scene may be quite low.1 However, in recent work we have demonstrated that long-term memory is capable of storing a massive number of visual objects with significant detail about each item.3 In the present paper we attempt to reconcile these findings by demonstrating that observers do not experience ‘change blindness’ with the real world objects used in our previous experiment if they are given sufficient time to encode each item. Our results (see also refs. 4 and 5) suggest that one of the major causes of change blindness for real-world objects is a lack of encoding time or attention to each object.


Psychological Science | 2008

Statistical Learning Using Real-World Scenes: Extracting Categorical Regularities Without Conscious Intent

Timothy F. Brady; Aude Oliva

Recent work has shown that observers can parse streams of syllables, tones, or visual shapes and learn statistical regularities in them without conscious intent (e.g., learn that A is always followed by B). Here, we demonstrate that these statistical-learning mechanisms can operate at an abstract, conceptual level. In Experiments 1 and 2, observers incidentally learned which semantic categories of natural scenes covaried (e.g., kitchen scenes were always followed by forest scenes). In Experiments 3 and 4, category learning with images of scenes transferred to words that represented the categories. In each experiment, the category of the scenes was irrelevant to the task. Together, these results suggest that statistical-learning mechanisms can operate at a categorical level, enabling generalization of learned regularities using existing conceptual knowledge. Such mechanisms may guide learning in domains as disparate as the acquisition of causal knowledge and the development of cognitive maps from environmental exploration.


Psychological Science | 2013

Visual Long-Term Memory Has the Same Limit on Fidelity as Visual Working Memory

Timothy F. Brady; Talia Konkle; Jonathan Gill; Aude Oliva; George A. Alvarez

Visual long-term memory can store thousands of objects with surprising visual detail, but just how detailed are these representations, and how can one quantify this fidelity? Using the property of color as a case study, we estimated the precision of visual information in long-term memory, and compared this with the precision of the same information in working memory. Observers were shown real-world objects in random colors and were asked to recall the colors after a delay. We quantified two parameters of performance: the variability of internal representations of color (fidelity) and the probability of forgetting an object’s color altogether. Surprisingly, the fidelity of color information in long-term memory was comparable to the asymptotic precision of working memory. These results suggest that long-term memory and working memory may be constrained by a common limit, such as a bound on the fidelity required to retrieve a memory representation.
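The two parameters this abstract describes (the variability of remembered color and the probability of forgetting altogether) map onto a standard mixture model of recall errors: a von Mises component for remembered colors plus a uniform component for guesses. A minimal sketch of that estimation idea, assuming simulated data and a grid-search fit rather than the authors' actual analysis code:

```python
import numpy as np

# Illustrative sketch of a von Mises + uniform mixture fit of the kind
# used to separate memory fidelity from forgetting in color recall.
# The simulated data and parameter values are assumptions, not the
# paper's data.
rng = np.random.default_rng(0)

g_true, k_true, n = 0.3, 8.0, 2000           # guess rate, concentration, trials
guess = rng.random(n) < g_true
errors = np.where(guess,
                  rng.uniform(-np.pi, np.pi, n),   # forgotten: uniform guess
                  rng.vonmises(0.0, k_true, n))    # remembered: noisy recall

def mixture_ll(errors, g, k):
    """Log-likelihood of recall errors under the mixture model."""
    vm = np.exp(k * np.cos(errors)) / (2 * np.pi * np.i0(k))
    return np.sum(np.log((1 - g) * vm + g / (2 * np.pi)))

# Maximum-likelihood fit by grid search: g estimates the probability of
# forgetting, k the precision (higher k means higher fidelity).
gs = np.linspace(0.0, 0.9, 91)
ks = np.linspace(0.5, 20.0, 79)
ll = np.array([[mixture_ll(errors, g, k) for k in ks] for g in gs])
gi, ki = np.unravel_index(np.argmax(ll), ll.shape)
g_hat, k_hat = gs[gi], ks[ki]
print(f"estimated guess rate {g_hat:.2f}, concentration {k_hat:.1f}")
```

Fitting the same model to working-memory and long-term-memory errors and comparing the recovered concentration parameters is the kind of comparison that supports the paper's claim of a shared fidelity limit.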

Collaboration


Dive into Timothy F. Brady's collaborations.

Top Co-Authors

Aude Oliva

Massachusetts Institute of Technology

Joshua B. Tenenbaum

Massachusetts Institute of Technology
