Judith E. Fan
Princeton University
Publications
Featured research published by Judith E. Fan.
Cognition | 2013
Judith E. Fan; Nicholas B. Turk-Browne
Attending to objects in the world affects how we perceive and remember them. What are the consequences of attending to an object in mind? In particular, how does reporting the features of a recently seen object guide visual learning? In three experiments, observers were presented with abstract shapes in a particular color, orientation, and location. After viewing each object, observers were cued to report one feature from visual short-term memory (VSTM). In a subsequent test, observers were cued to report features of the same objects from visual long-term memory (VLTM). We tested whether reporting a feature from VSTM: (1) enhances VLTM for just that feature (practice-benefit hypothesis), (2) enhances VLTM for all features (object-based hypothesis), or (3) simultaneously enhances VLTM for that feature and suppresses VLTM for unreported features (feature-competition hypothesis). The results provided support for the feature-competition hypothesis, whereby the representation of an object in VLTM was biased towards features reported from VSTM and away from unreported features (Experiment 1). This bias could not be explained by the amount of sensory exposure or response learning (Experiment 2) and was amplified by the reporting of multiple features (Experiment 3). Taken together, these results suggest that selective internal attention induces competitive dynamics among features during visual learning, flexibly tuning object representations to align with prior mnemonic goals.
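To make the contrast between these hypotheses concrete, here is a minimal sketch of their qualitative predictions (illustrative accuracy values only, not data from the paper): each hypothesis predicts a distinct pattern of VLTM accuracy for reported versus unreported features, relative to an assumed no-report baseline.

```python
# Illustrative sketch: qualitative VLTM accuracy patterns predicted by the
# three hypotheses. All numbers are hypothetical, chosen only to show the
# direction of each predicted effect.
baseline = 0.60  # assumed VLTM accuracy in the absence of a VSTM report

predictions = {
    "practice-benefit":    {"reported": baseline + 0.10, "unreported": baseline},
    "object-based":        {"reported": baseline + 0.10, "unreported": baseline + 0.10},
    "feature-competition": {"reported": baseline + 0.10, "unreported": baseline - 0.10},
}

for hypothesis, acc in predictions.items():
    print(f"{hypothesis:>20}: reported = {acc['reported']:.2f}, "
          f"unreported = {acc['unreported']:.2f}")
```

The reported results match the third pattern: reported features end up above baseline and unreported features below it.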
Journal of Vision | 2016
Judith E. Fan; J. Benjamin Hutchinson; Nicholas B. Turk-Browne
When perception is underdetermined by current sensory inputs, memories for related experiences in the past might fill in missing detail. To evaluate this possibility, we measured the likelihood of relying on long-term memory versus sensory evidence when judging the appearance of an object near the threshold of awareness. Specifically, we associated colors with shapes in long-term memory and then presented the shapes again later in unrelated colors and had observers judge the appearance of the new colors. We found that responses were well characterized as a bimodal mixture of original and current-color representations (vs. an integrated unimodal representation). That is, although irrelevant to judgments of the current color, observers occasionally anchored their responses on the original colors in memory. Moreover, the likelihood of such memory substitutions increased when sensory input was degraded. In fact, they occurred even in the absence of sensory input when observers falsely reported having seen something. Thus, although perceptual judgments intuitively seem to reflect the current state of the environment, they can also unknowingly be dictated by past experiences.
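The mixture claim is easy to simulate. A minimal sketch (all parameters are hypothetical, not fitted values from the paper), assuming a color wheel measured in degrees: most responses are anchored on the currently shown color, while a minority are memory substitutions anchored on the originally associated color, yielding a bimodal error distribution.

```python
# Minimal simulation of the bimodal-mixture account of color reports.
# p_memory and noise_sd are assumed illustrative parameters.
import numpy as np

rng = np.random.default_rng(0)
n_trials = 10_000
p_memory = 0.2          # assumed probability of a memory substitution
noise_sd = 15.0         # assumed report noise (degrees on the color wheel)

current_color = 0.0     # color actually shown at test
original_color = 120.0  # color associated with the shape in long-term memory

# each response is anchored on one of the two representations, plus noise
anchors = rng.choice([current_color, original_color],
                     size=n_trials, p=[1 - p_memory, p_memory])
responses = anchors + rng.normal(0.0, noise_sd, size=n_trials)

# correct reports cluster near the current color; memory substitutions
# cluster near the original color: a bimodal, not unimodal, distribution
near_original = np.abs((responses - original_color + 180.0) % 360.0 - 180.0) < 45.0
print(f"share of reports near the original color: {near_original.mean():.2f}")
```

On this account, degrading the sensory input corresponds to raising p_memory, which is the pattern the study reports.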
Journal of Experimental Psychology: Learning, Memory, and Cognition | 2016
Judith E. Fan; Nicholas B. Turk-Browne
Holding recently experienced information in mind can help us achieve our current goals. However, such immediate and direct forms of guidance from working memory are less helpful over extended delays or when other related information in long-term memory is useful for reaching these goals. Here we show that information that was encoded in the past but is no longer present or relevant to the task also guides attention. We examined this by associating multiple unique features with novel shapes in visual long-term memory (VLTM), and subsequently testing how memories for these objects biased the deployment of attention. In Experiment 1, VLTM for associated features guided visual search for the shapes, even when these features had never been task-relevant. In Experiment 2, associated features captured attention when presented in isolation during a secondary task that was completely unrelated to the shapes. These findings suggest that long-term memory enables a durable and automatic type of memory-based attentional control.
Journal of Experimental Psychology: Human Perception and Performance | 2016
Judith E. Fan; Nicholas B. Turk-Browne; Jordan A. Taylor
We often interact with multiple objects at once, such as when balancing food and beverages on a dining tray. The success of these interactions relies upon representing not only individual objects, but also statistical summary features of the group (e.g., center of mass). Although previous research has established that humans can readily and accurately extract such statistical summary features, how this ability is acquired and refined through experience remains unaddressed. Here we ask if training and task feedback can improve summary perception. During training, participants practiced estimating the centroid (i.e., average location) of an array of objects on a touchscreen display. Before and after training, they completed a transfer test requiring perceptual discrimination of the centroid. Across 4 experiments, we manipulated the information in task feedback and how participants interacted with the objects during training. We found that vector error feedback, which conveys error in terms of both distance and direction, was the only form of feedback that improved perceptual discrimination of the centroid on the transfer test. Moreover, this form of feedback was effective only when coupled with reaching movements toward the visual objects. Taken together, these findings suggest that sensory-prediction error, signaling the mismatch between expected and actual consequences of an action, may play a previously unrecognized role in tuning perceptual representations.
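The feedback manipulation at the heart of these experiments is simple to state computationally. A minimal sketch (hypothetical screen coordinates, not the study's stimuli): the centroid is the mean of the object positions, and vector error is the full displacement from the centroid to the touch response, carrying both the distance and the direction of the miss, whereas distance-only feedback discards direction.

```python
# Minimal sketch of centroid estimation and the two kinds of error feedback.
# Object positions and the touch location are hypothetical values.
import numpy as np

objects = np.array([[120.0, 340.0],   # (x, y) positions of the objects, in px
                    [200.0, 410.0],
                    [260.0, 300.0],
                    [180.0, 280.0]])
centroid = objects.mean(axis=0)        # average location of the array

touch = np.array([210.0, 360.0])       # where the participant pointed
vector_error = touch - centroid        # conveys distance AND direction
distance_error = np.linalg.norm(vector_error)  # distance-only feedback

print(f"centroid: {centroid}, vector error: {vector_error}, "
      f"distance-only error: {distance_error:.1f} px")
```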
International Conference on Computer Graphics and Interactive Techniques | 2014
Judith E. Fan; Daniel Yamins; James J. DiCarlo; Nicholas B. Turk-Browne
Humans have devised a wide range of technologies for creating visual representations of real-world objects. Some are ancient (e.g., line drawings using a stylus), while others are very modern (e.g., photography and 3D computer graphics rendering). Despite large differences in the images produced by these differing modalities (e.g., sparse contours in sketches vs. continuous hue variation in photographs), all are effective at evoking the original real-world object.
F1000Research | 2014
Jonas Everaert; Judith E. Fan; Ernst H. W. Koster; Nicholas B. Turk-Browne
Cognitive processes such as attention and memory are closely related to one's emotional state: Healthy individuals pay more attention to and better remember positively valenced stimuli, whereas anxious and depressed individuals are biased toward negatively valenced stimuli. Although such biases are well documented, the underlying mechanisms are unclear. Here we examine how emotional associations in long-term memory guide spatial attention. In an initial encoding phase, distinct colors were consistently paired with faces depicting either happy, neutral, or angry expressions while participants performed a gender-discrimination cover task. In a subsequent test phase, two lines were presented on each trial, one tilted away from vertical (target) and the other vertical (distractor), and participants located the target's position. Colored disks framed each line: one color was associated with happy or angry faces and the other was associated with neutral faces. Colors associated with emotional faces framed the target (valid) and distractor (invalid) with equal probability, and target location was randomized; colors were thus completely task-irrelevant. Attentional capture was quantified as invalid minus valid RT for a given valence category. We found that attentional capture for colors associated with happy faces was significantly predicted by subsequent memory for these color-expression associations. This was not true for colors associated with angry faces, suggesting a dissociation between attentional guidance from positive vs. negative information. Interestingly, individual differences in depression and anxiety levels were negatively correlated with the degree of attentional capture by colors associated with happy faces (i.e., more depressed individuals showed less capture from stimuli with positive associations). Taken together, these findings contribute to our understanding of how basic cognitive processes are modulated by emotion in both healthy and clinical populations. Namely, negative biases in anxiety and depression do not merely reflect motivational or decisional factors, but partly arise from more automatic forms of attentional capture.
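The capture measure is a simple difference score. A minimal sketch (hypothetical reaction times, not data from the study): attentional capture for a valence category is the mean RT on invalid trials (the emotion-associated color framing the distractor) minus the mean RT on valid trials (that color framing the target).

```python
# Minimal sketch of the attentional-capture difference score.
# All reaction times (ms) below are hypothetical.
import numpy as np

rt = {
    "happy": {"valid": np.array([512.0, 498.0, 530.0]),
              "invalid": np.array([560.0, 545.0, 571.0])},
    "angry": {"valid": np.array([520.0, 534.0, 509.0]),
              "invalid": np.array([528.0, 541.0, 515.0])},
}

for valence, trials in rt.items():
    capture = trials["invalid"].mean() - trials["valid"].mean()
    print(f"{valence}: attentional capture = {capture:.1f} ms")
```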
Behavioral and Brain Sciences | 2014
Judith E. Fan; Jordan W. Suchow
Bentley et al.'s framework assigns phenomena of personal and collective decision-making to regions of a dual-axis map. Here, we propose that understanding the collective dynamics of decision-making requires consideration of factors that guide movement across the map. One such factor is self-awareness, which can lead a group to seek out new knowledge and reposition itself on the map.
Visual Cognition | 2013
Judith E. Fan; Nicholas B. Turk-Browne; Jordan A. Taylor
The amount of sensory data encountered by the visual system often exceeds its processing capacity. One solution is to exploit statistical structure in the natural environment to generate a more efficient representation of the information (Simoncelli & Olshausen, 2001). For example, the visual system may construct a "statistical summary representation" over groups of visual objects, reflecting their general properties (Alvarez, 2011). Indeed, it has been shown that observers are able to quickly and accurately extract average values over a range of visual feature dimensions, including size (Chong & Treisman, 2003), orientation (Parkes et al., 2001), and emotional expression (Haberman & Whitney, 2007). However, it remains an open question how observers learn to produce such accurate estimates of these summary statistics. Although good performance on these tasks suggests that summary features are readily accessible, it is not clear to what extent these statistical operations are performed automatically, integrating over sensory information in an unsupervised fashion, or are penetrable to task demands, flexibly incorporating observer goals and error-related feedback to maximize performance (Bauer, 2009; Myczek & Simons, 2008). In the present study, we sought to understand the role of learning in statistical summary representations. Specifically, we examined the contribution of task practice and performance feedback to perceptual discrimination of the centroid (i.e., mean location) of a set of objects (Alvarez & Oliva, 2008). We hypothesized that providing vector error feedback (i.e., containing both distance and direction information) while observers practiced making pointing movements toward the centroid would improve the fidelity of their centroid representations. This improvement might be reflected in reduced error during training and lower discrimination thresholds in an independent perceptual test.
Cognitive Science | 2018
Judith E. Fan; Daniel Yamins; Nicholas B. Turk-Browne
Production and comprehension have long been viewed as inseparable components of language. The study of vision, by contrast, has centered almost exclusively on comprehension. Here we investigate drawing: the most basic form of visual production. How do we convey concepts in visual form, and how does refining this skill, in turn, affect recognition? We developed an online platform for collecting large amounts of drawing and recognition data, and applied a deep convolutional neural network model of visual cortex trained only on natural images to explore the hypothesis that drawing recruits the same abstract feature representations that support natural visual object recognition. Consistent with this hypothesis, higher layers of this model captured the abstract features of both drawings and natural images most important for recognition, and people learning to produce more recognizable drawings of objects exhibited enhanced recognition of those objects. These findings could explain why drawing is so effective for communicating visual concepts; they also suggest novel approaches for evaluating and refining conceptual knowledge and highlight the potential of deep networks for understanding human learning.
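The modeling logic can be sketched compactly, under assumptions the abstract leaves open (VGG-16 as the ImageNet-trained network, cosine similarity as the comparison, and random tensors standing in for preprocessed photo and drawing images): if drawing recruits the same abstract features as natural vision, a drawing and a photo of the same object should look more alike to the network at higher layers than at lower ones.

```python
# Sketch: compare a photo and a drawing of the same object in the feature
# spaces of an ImageNet-trained CNN. VGG-16 and the layer indices are
# assumptions for illustration; random tensors stand in for real,
# preprocessed 1x3x224x224 images.
import torch
from torchvision.models import vgg16, VGG16_Weights

model = vgg16(weights=VGG16_Weights.DEFAULT).eval()
photo, drawing = torch.rand(1, 3, 224, 224), torch.rand(1, 3, 224, 224)

def layer_features(x, upto):
    """Run the convolutional stack through layer index `upto` and flatten."""
    with torch.no_grad():
        for i, layer in enumerate(model.features):
            x = layer(x)
            if i == upto:
                break
    return x.flatten()

for name, idx in [("early layer (conv1_1)", 0), ("late layer (conv5_3)", 28)]:
    f_photo = layer_features(photo, idx)
    f_drawing = layer_features(drawing, idx)
    sim = torch.nn.functional.cosine_similarity(f_photo, f_drawing, dim=0).item()
    print(f"{name}: cosine similarity = {sim:.3f}")
```

With real images rather than random tensors, the hypothesis predicts greater late-layer than early-layer similarity for matching objects.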
Journal of Vision | 2015
Judith E. Fan; Daniel Yamins; Nicholas B. Turk-Browne
Drawing is a powerful tool for communicating ideas visually: a few well-placed strokes can convey the identity of a face, object, or scene. Here we examine how people learn to draw real-world objects in order to understand the more general consequences of visual production on the representation of objects in the mind. As a case study, we ask: How does practice drawing particular objects affect the way that those and other objects are represented? Participants played an online game in which they were prompted on each trial with an image (N=314) or word (N=276) that referred to a target object for them to draw. We used a high-performing, deep convolutional neural network model of ventral visual cortex to guess the identity of the drawn object in real time, providing participants with immediate feedback about the quality of their drawing. Objects belonged to one of eight categories, each containing eight items. Each participant was randomly assigned two of these categories. During training, participants drew four randomly selected objects in one category (Trained) multiple times. Before and after training, participants drew the other four objects in that category (Near), as well as the objects in the second category (Far), once each. We found that drawings of Trained items were better recognized by the model after training, and that this improvement reflected decreased confusion with other items in the same category. By contrast, recognition of Near items worsened after training, which reflected increased within-category confusion. Recognition of Far items did not change significantly. These results show that visual production can reshape the representational space for objects, by differentiating trained objects and merging other objects nearby in the space. More broadly, these findings suggest that the outward expression of visual concepts can itself bring about changes to their internal representation. Meeting abstract presented at VSS 2015.
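The confusion analysis behind these results can be sketched directly. A minimal sketch (simulated model outputs, not the study's data): given the recognizer's probability distribution over all 64 items, within-category confusion is the probability mass the model assigns to the target's seven category mates.

```python
# Sketch of the within-category confusion measure, using simulated softmax
# outputs in place of the real model's predictions.
import numpy as np

rng = np.random.default_rng(1)
n_items, n_per_category = 64, 8       # eight categories of eight items each

# hypothetical model output for one drawing of item 0 (first category)
logits = rng.normal(0.0, 1.0, n_items)
logits[0] += 2.0                      # the model mostly recognizes the item
probs = np.exp(logits) / np.exp(logits).sum()

target = 0
category_mates = np.arange(n_per_category)           # items 0-7 share a category
within = probs[category_mates].sum() - probs[target] # mass on the other seven
print(f"p(correct) = {probs[target]:.2f}, within-category confusion = {within:.2f}")
```

In these terms, training increased p(correct) for Trained items by shrinking this within-category mass, while Near items showed the opposite shift.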