Mark W. Schurgin
Johns Hopkins University
Publications
Featured research published by Mark W. Schurgin.
Journal of Vision | 2014
Mark W. Schurgin; J. Nelson; S. Iida; H. Ohira; Joan Y. Chiao; Steven Franconeri
When distinguishing whether a face displays a certain emotion, some regions of the face may contain more useful information than others. Here we ask whether people differentially attend to distinct regions of a face when judging different emotions. Experiment 1 measured eye movements while participants discriminated between emotional (joy, anger, fear, sadness, shame, and disgust) and neutral facial expressions. Participant eye movements primarily fell in five distinct regions (eyes, upper nose, lower nose, upper lip, nasion). Distinct fixation patterns emerged for each emotion, such as a focus on the lips for joyful faces and a focus on the eyes for sad faces. These patterns were strongest for emotional faces but were still present when viewers sought evidence of emotion within neutral faces, indicating a goal-driven influence on eye-gaze patterns. Experiment 2 verified that these fixation patterns tended to reflect attention to the most diagnostic regions of the face for each emotion. Eye movements appear to follow both stimulus-driven and goal-driven perceptual strategies when decoding emotional information from a face.
Journal of Experimental Psychology: General | 2017
Mark W. Schurgin; Jonathan Flombaum
Humans recognize thousands of objects, with relative tolerance to variable retinal inputs. The acquisition of this ability is not fully understood, and it remains an area in which artificial systems have yet to surpass people. We sought to investigate the memory process that supports object recognition. Specifically, we investigated the association of inputs that co-occur over short periods of time. We tested the hypothesis that human perception exploits expectations about object kinematics to limit the scope of association to inputs that are likely to share the same token as their source. In several experiments we exposed participants to images of objects and then tested recognition sensitivity. Using motion, we manipulated whether successive encounters with an image took place through kinematics that implied the same or a different token as the source of those encounters. Images were injected with noise or shown at varying orientations, and we included two manipulations of motion kinematics. Across all experiments, memory performance was better for images that had been previously encountered with kinematics implying a single token. A model-based analysis similarly showed greater memory strength when images were shown via kinematics that implied a single token. These results suggest that constraints from physics are built into the mechanisms that support memory for objects. Such constraints -- often characterized as 'Core Knowledge' -- are known to support perception and cognition broadly, even in young infants, but they have never before been considered a mechanism for recognition memory.
Visual Cognition | 2013
Mark W. Schurgin; Zachariah M. Reagh; Michael A. Yassa; Jonathan Flombaum
bioRxiv | 2018
Mark W. Schurgin; John T. Wixted; Timothy F. Brady
Almost all models of visual memory implicitly assume that errors in mnemonic representations are linearly related to distance in stimulus space. Here, we show that neither memory nor perception is appropriately scaled in stimulus space; instead, both are based on a transformed similarity representation that is non-linearly related to stimulus space. This result calls into question a foundational assumption of extant models of visual working memory. Once psychophysical similarity is taken into account, aspects of memory that have been thought to demonstrate a fixed working memory capacity of ~3-4 items and to require fundamentally different representations -- across different stimuli, tasks, and types of memory -- can be parsimoniously explained with a unitary signal detection framework. These results have significant implications for the study of visual memory and lead to a substantial reinterpretation of the relationship between perception, working memory, and long-term memory.
Limits on the storage capacity of working memory have been investigated for decades, but the nature of those limits remains elusive. An important but largely overlooked consideration in this research concerns the relationship between the physical properties of stimuli used in visual working memory tasks and their psychological properties. Here, we show that the relationship between physical distance in stimulus space and the psychological confusability of items as measured in a perceptual task is non-linear. Taking this relationship into account leads to a parsimonious conceptualization of visual working memory, greatly simplifying the models needed to account for performance, allowing generalization to new stimulus spaces, and providing a mapping between tasks that have been thought to measure distinct qualities. In particular, performance across a variety of working memory tasks can be explained by a one-parameter model implemented within a signal detection framework. Moreover, despite the system-level distinctions between working and long-term memory, after taking psychological distance into account we find a strong affinity between the theoretical frameworks that guide both systems, as performance in each is accurately described by the same straightforward signal detection framework.
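The one-parameter signal detection account described in this abstract can be illustrated with a small simulation. This is a hedged sketch, not the authors' implementation: the exponential similarity function, its falloff constant `tau`, and the function names are illustrative assumptions.

```python
import numpy as np

def psychophysical_similarity(dist_deg, tau=20.0):
    # Assumed non-linear (exponential) falloff of psychological similarity
    # with distance in stimulus space (e.g., degrees on a color wheel).
    # tau is a hypothetical constant, not a value from the paper.
    return np.exp(-np.abs(dist_deg) / tau)

def simulate_report(target_deg, dprime, n_trials=10000, rng=None):
    # One-parameter signal detection model: each candidate value on the
    # wheel generates a noisy familiarity signal whose mean is d' scaled
    # by its similarity to the studied target; the maximum signal wins.
    if rng is None:
        rng = np.random.default_rng(0)
    wheel = np.arange(-180, 180)  # candidate response values (degrees)
    means = dprime * psychophysical_similarity(wheel - target_deg)
    signals = means + rng.standard_normal((n_trials, wheel.size))
    return wheel[np.argmax(signals, axis=1)]  # reported values per trial

errors = simulate_report(target_deg=0, dprime=2.0)
print(np.mean(np.abs(errors) <= 10))  # proportion of near-target reports
```

With a larger d' (memory strength), simulated reports cluster tightly around the target; as d' approaches zero, responses approach a uniform guessing distribution. This is one way a single signal detection parameter can reproduce patterns that have been read as fixed capacity limits.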
Journal of Experimental Psychology: Human Perception and Performance | 2018
Mark W. Schurgin; Jonathan Flombaum
Human visual memory is tolerant, meaning that it supports object recognition despite variability across encounters at the image level. Tolerant object recognition remains one capacity in which artificial intelligence trails humans. Typically, tolerance is described as a property of human visual long-term memory (VLTM). In contrast, visual working memory (VWM) is not usually ascribed a role in tolerant recognition, with tests of that system usually demanding discriminatory power: identifying changes, not sameness. There are good reasons to expect that VLTM is more tolerant; functionally, recognition over the long term must accommodate the fact that objects will not be viewed under identical conditions; and practically, the passive and massive nature of VLTM may impose relatively permissive criteria for judging that two inputs are the same. But empirically, tolerance has never been compared across working and long-term visual memory. We therefore developed a novel paradigm for equating encoding and test across different memory types. In each experimental trial, participants saw two objects; memory for one was tested immediately (VWM) and for the other later (VLTM). VWM performance was better than VLTM performance and remained robust despite the introduction of image and object variability. In contrast, VLTM performance declined linearly as more variability was introduced into test stimuli. Additional experiments excluded interference effects as causes for the observed differences. These results suggest the possibility of a previously unidentified role for VWM in the acquisition of tolerant representations for object recognition.
Scientific Reports | 2017
Bogdan Petre; Pascal Tétreault; V. A. Mathur; Mark W. Schurgin; Joan Y. Chiao; Lejian Huang; A. V. Apkarian
Pain perception temporarily exaggerates abrupt thermal stimulus changes, revealing a mechanism for nociceptive temporal contrast enhancement (TCE). Although the mechanism is unknown, a non-linear model with perceptual feedback accurately simulates the phenomenon. Here we test whether a mechanism in the central nervous system underlies thermal TCE. Our model successfully predicted an optimal stimulus, incorporating a transient temperature offset (step-up/step-down), with maximal TCE, resulting in psychophysically verified large decrements in pain response ("offset analgesia"; mean analgesia: 85%, n = 20 subjects). Next, this stimulus was delivered using two thermodes, one delivering the longer-duration baseline temperature pulse and the other superimposing a short higher-temperature pulse. The two stimuli were applied simultaneously either near or far on the same arm, or on opposite arms. Spatial separation across multiple peripheral receptive fields ensures that the composite stimulus timecourse is first reconstituted in the central nervous system. Following ipsilateral stimulus cessation on the high-temperature thermode, but before cessation of the low-temperature stimulus, properties of TCE were observed both for individual subjects and in group-mean responses. This demonstrates that a central integration mechanism is sufficient to evoke painful thermal TCE, an essential step in transforming transient afferent nociceptive signals into a stable pain perception.
Visual Cognition | 2015
Mark W. Schurgin; Jonathan Flombaum
Recent research suggests that visual long-term memory (VLTM) and visual working memory (VWM) possess similar resolution for feature memory. We investigated resolution in more holistic terms, exploiting a two-alternative forced-choice (2AFC) procedure and injecting noise into stimuli. Participants were exposed to two real-world objects per trial in a VWM experiment. One object was tested after a short delay; the other object from each trial was tested in a later session. We observed better performance for VWM, and the two systems were affected by noise in distinct ways. These results have broad implications for theories of visual memory.
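Sensitivity in a 2AFC procedure like this one is commonly summarized as d'. The following is a sketch of the standard equal-variance signal detection conversion, not the specific analysis reported in the paper:

```python
import math
from statistics import NormalDist

def dprime_from_2afc(pc):
    """Convert 2AFC proportion correct to d' under the standard
    equal-variance signal detection model: d' = sqrt(2) * z(pc)."""
    return math.sqrt(2) * NormalDist().inv_cdf(pc)

# E.g., a participant at 76% correct corresponds to d' of roughly 1.0.
print(round(dprime_from_2afc(0.76), 2))
```

Chance performance (50% correct) maps to d' = 0, and the sqrt(2) factor reflects that a 2AFC decision compares two noisy samples rather than one.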
Attention Perception & Psychophysics | 2014
Mark W. Schurgin; Jonathan Flombaum
Reproducing the location of an object from the contents of spatial working memory requires the translation of a noisy representation into an action at a single location—for instance, a mouse click or a mark with a writing utensil. In many studies, these kinds of actions result in biased responses that suggest distortions in spatial working memory. We sought to investigate the possibility of one mechanism by which distortions could arise, involving an interaction between undistorted memories and nonuniformities in attention. Specifically, the resolution of attention is finer below than above fixation, which led us to predict that bias could arise if participants tend to respond in locations below as opposed to above fixation. In Experiment 1 we found such a bias to respond below the true position of an object. Experiment 2 demonstrated with eye-tracking that fixations during response were unbiased and centered on the remembered object’s true position. Experiment 3 further evidenced a dependency on attention relative to fixation, by shifting the effect horizontally when participants were required to tilt their heads. Together, these results highlight the complex pathway involved in translating probabilistic memories into discrete actions, and they present a new attentional mechanism by which undistorted spatial memories can lead to distorted reproduction responses.
Attention Perception & Psychophysics | 2018
Mark W. Schurgin
The majority of research on visual memory has taken a compartmentalized approach, focusing exclusively on memory over shorter or longer durations, that is, visual working memory (VWM) or visual episodic long-term memory (VLTM), respectively. This tutorial provides a review spanning the two areas, written with readers in mind who may be familiar with only one or the other. The review is divided into six sections. It starts by distinguishing VWM and VLTM from one another, in terms of how they are generally defined and their relative functions. This is followed by a review of the major theories and methods guiding VLTM and VWM research. The final section is devoted to identifying points of overlap and distinction across the two literatures, providing a synthesis that will inform future research in both fields. By more intimately relating methods and theories from VWM and VLTM to one another, new advances can be made that may shed light on the kinds of representational content and structure supporting human visual memory.
bioRxiv | 2018
Mark W. Schurgin; Corbin Cunningham; Howard E. Egeth; Timothy F. Brady
Humans have remarkable visual long-term memory abilities, capable of storing thousands of objects in significant detail. However, it remains unknown how such memory is utilized during the short-term maintenance of information. Specifically, if people have a previously encoded memory for an item, how does this affect subsequent working memory for that same item? Here, we demonstrate that people can quickly and accurately make use of visual long-term memories and therefore maintain less perceptual information actively in working memory. We assessed how much perceptual information is actively maintained in working memory by measuring neural activity during the delay period of a working memory task using electroencephalography. We find that despite maintaining less perceptual information in working memory when long-term memory representations are available, there is no decrement in memory performance. This suggests that under certain circumstances people can dynamically disengage working memory maintenance and instead use long-term memories when available. However, this does not mean participants always utilize long-term memory. In a follow-up experiment, we introduced additional perceptual interference into working memory and found that participants actively maintained items in working memory even when they had existing long-term memories available. These results clarify the conditions under which long-term and working memory operate: working memory is engaged when new information is encountered or perceptual interference is high, whereas visual long-term memory may otherwise be rapidly accessed and utilized in lieu of active perceptual maintenance. These data demonstrate that the interactions between working memory and long-term memory are more dynamic and fluid than previously thought.