Won Mok Shim
Dartmouth College
Publications
Featured research published by Won Mok Shim.
Nature Neuroscience | 2008
Mark A. Williams; Chris I. Baker; Hans Op de Beeck; Won Mok Shim; Sabin Dang; Christina Triantafyllou; Nancy Kanwisher
The mammalian visual system contains an extensive web of feedback connections projecting from higher cortical areas to lower areas, including primary visual cortex. Although multiple theories have been proposed, the role of these connections in perceptual processing is not understood. We found that the pattern of functional magnetic resonance imaging response in human foveal retinotopic cortex contained information about objects presented in the periphery, far away from the fovea, an effect not predicted by prior theories of feedback. This information was position invariant, correlated with perceptual discrimination accuracy, and was found only in foveal, but not peripheral, retinotopic cortex. Our data cannot be explained by differential eye movements, activation from the fixation cross, or spillover activation from peripheral retinotopic cortex or from the lateral occipital complex. Instead, our findings indicate that position-invariant object information from higher cortical areas is fed back to foveal retinotopic cortex, enhancing task performance.
Psychonomic Bulletin & Review | 2008
Won Mok Shim; George A. Alvarez; Yuhong V. Jiang
Humans are limited in their ability to maintain multiple attentional foci. In attentive tracking of moving objects, performance declines as the number of tracked targets increases. Previous studies have interpreted this reduction in terms of a limit on the number of attentional foci. However, increasing the number of targets usually reduces spatial separation among different targets. In this study, we examine the role of target spatial separation in maintaining multiple attentional foci. Results from a multiple-object tracking task show that tracking accuracy deteriorates as the spatial separation between targets decreases. We propose that local interaction between nearby attentional foci modulates the resolution of attention, and that the capacity limitation in attentive tracking originates in part from limitations in maintaining critical spacing among multiple attentional foci. These findings are consistent with the hypothesis that tracking performance is limited not primarily by a fixed number of locations, but by factors such as the spacing and speed of the targets and distractors.
Vision Research | 2004
Won Mok Shim; Patrick Cavanagh
Motion can influence the perceived position of nearby stationary objects (Nature Neuroscience 3 (2000) 954). To investigate the influence of high-level motion processes on the position shift while controlling for low-level motion signals, we measured the position shift as a function of the motion seen in a bistable quartet. In this stimulus, motion can be seen along either one or the other of two possible paths. An illusory position shift was observed only when the flashes were adjacent to the path where motion was perceived. If the flash was adjacent to the other path, where no motion was perceived, there was no illusory displacement. Thus, for the same physical stimulus, a change in the perceived motion path determined the location where illusory position shifts would be seen. This result indicates that high-level motion processes alone are sufficient to produce the position shift of stationary objects. The effect of the timing of the test flash between the onset and offset of the motion was also examined. The position shifts were greatest at the onset of motion, decreased gradually, and disappeared at the offset of motion. We propose an attentional repulsion explanation for the shift effect.
Journal of Vision | 2006
Tal Makovski; Won Mok Shim; Yuhong V. Jiang
Failure to detect changes to salient visual input across a brief interval has popularized the use of change detection, a paradigm that plays important roles in recent studies of visual perception, short-term memory, and consciousness. Much research has focused on the nature of visual representation for the pre- and postchange displays, yet little is known about how events inserted between the pre- and postchange displays interfere with visual change detection. To address this question, we tested change detection of colors, spatial locations, and natural scenes, when the interval between changes was (1) blank, (2) filled with a visual scene, or (3) filled with an auditory word. Participants were asked to either ignore the inserted visual or auditory event or attend to it by categorizing it as animate or inanimate. Results showed that the ability to detect visual changes was dramatically impaired by attending to a secondary task during the delay. This interference was significant for auditory as well as visual interfering events and was invariant to the complexity of the prechange displays. Passive listening produced no interference, whereas passive viewing produced small but significant interference. We conclude that visual change detection relies significantly on central, amodal attention.
Attention Perception & Psychophysics | 2008
Yuhong V. Jiang; Won Mok Shim; Tal Makovski
Previous studies have shown that the number of objects we can actively hold in visual working memory is smaller for more complex objects. However, complex objects are not just more complex but are often more similar to other complex objects used as test probes. To separate effects of complexity from effects of similarity, we measured visual memory following a 1-sec delay for complex and simple objects at several levels of memory-to-test similarity. When memory load was one object, memory accuracy for a face (a complex attribute) was comparable to that for a line orientation (a simple attribute) when the face changed in steps of 10% along a morphing continuum and the line changed in steps of 5° in orientation. Performance declined with increasing memory load and increasing memory-to-test similarity. Remarkably, when memory load was three or four objects, face memory was better than orientation memory at comparable change steps. These results held when memory for line orientations was compared with that for inverted faces. We conclude that complex objects do not always exhaust visual memory more quickly than simple objects do.
Vision Research | 2005
Won Mok Shim; Patrick Cavanagh
Several studies have shown that the perceived position of a briefly presented stimulus can be displaced by nearby motion or by eye movements. We examined whether attentive tracking can also modulate the perceived position of flashed static objects when eye movements and low-level motion are controlled. Observers attentively tracked two target bars 180 degrees apart on a rotating, 12-spoke radial grating and judged the alignment of two flashes that were briefly presented, one on each side of the grating. Because of the symmetry of the 12-spoke grating, test flashes could be timed so that the rotating grating was always aligned to a standard orientation at the time of the test, while the tracked bars themselves, being only two of the 12 spokes, could probe locations that differed by multiples of 30 degrees ahead of, aligned with, or behind the test bars. Despite the physical identity of the stimulus in each test--same orientation, same motion--the perceived position of the two flashes strongly depended on the locus of attention: when the test flashes were presented ahead of the tracked bars, a large position shift in the direction of the grating's motion was seen. If they were presented behind the tracked bars, the illusory displacement was reduced or slightly reversed. These effects of attention led us to suggest an attentional model of position distortions that links the effects seen for motion and for eye movements.
Cerebral Cortex | 2010
Won Mok Shim; George A. Alvarez; Timothy Vickery; Yuhong V. Jiang
Many everyday tasks require us to track moving objects with attention. The demand for attention increases both when more targets are tracked and when the targets move faster. These two aspects of attention, assigning multiple attentional foci (or indices) to targets and monitoring each focus with precision, may tap into different cognitive and brain mechanisms. In this study, we used functional magnetic resonance imaging to quantify the response profile of dorsal attentional areas to variations in the number of attentional foci and their spatiotemporal precision. Subjects were asked to track a specific spoke of either one or two pinwheels that rotated at various speeds. Their tracking performance declined both when more pinwheels were tracked and when the tracked pinwheels rotated faster. However, posterior parietal activity increased only when subjects tracked more pinwheels but remained flat when they tracked faster-moving pinwheels. The frontal eye fields and early visual areas increased activity both when there were more targets and when the targets rotated faster. These results suggest that the posterior parietal cortex is specifically involved in indexing independently moving targets with attention but not in monitoring each focus with precision.
Proceedings of the National Academy of Sciences of the United States of America | 2016
Edmund Chong; Ariana Familiar; Won Mok Shim
Significance: The visual system is often presented with degraded and partial visual information; thus, it must extensively fill in details not present in the physical input. Does the visual system fill in object-specific information as an object’s features change in motion? Here, we show that object-specific features (orientation) can be reconstructed from neural activity in early visual cortex (V1) while objects undergo dynamic transformations. Furthermore, our results suggest that this information is not generated by averaging the physically present stimuli or by mechanisms involved in visual imagery, which also requires internal reconstruction of information not physically present. Our study provides evidence that V1 plays a unique role in dynamic filling-in of integrated visual information during kinetic object transformations via feedback signals.

As raw sensory data are partial, our visual system extensively fills in missing details, creating enriched percepts based on incomplete bottom-up information. Despite evidence for internally generated representations at early stages of cortical processing, it is not known whether these representations include missing information of dynamically transforming objects. Long-range apparent motion (AM) provides a unique test case because objects in AM can undergo changes both in position and in features. Using fMRI and encoding methods, we found that the “intermediate” orientation of an apparently rotating grating, never presented in the retinal input but interpolated during AM, is reconstructed in population-level, feature-selective tuning responses in the region of early visual cortex (V1) that corresponds to the retinotopic location of the AM path. This neural representation is absent when AM inducers are presented simultaneously and when AM is visually imagined. Our results demonstrate dynamic filling-in in V1 for object features that are interpolated during kinetic transformations.
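The encoding approach described in this abstract (recovering feature-selective channel responses from voxel activity) is commonly implemented as an inverted encoding model. The sketch below illustrates the general two-step logic on simulated data; the channel count, tuning shape, noise level, and voxel model are illustrative assumptions, not the authors' actual fMRI pipeline.

```python
import numpy as np

def channel_responses(oris_deg, centers_deg, power=5):
    """Idealized orientation channels: cosine tuning raised to a power
    on the 180-degree orientation space (an illustrative basis)."""
    d = (np.asarray(oris_deg, float)[:, None]
         - centers_deg[None, :] + 90.0) % 180.0 - 90.0  # wrap to (-90, 90]
    return np.cos(np.deg2rad(d)) ** power               # trials x channels

rng = np.random.default_rng(0)
centers = np.arange(6) * 30.0            # 6 channels at 0, 30, ..., 150 deg
n_train, n_vox = 120, 50

# Simulated training data: voxel responses are a noisy linear
# combination of channel responses (hypothetical, for illustration).
train_oris = rng.uniform(0.0, 180.0, n_train)
C_train = channel_responses(train_oris, centers)        # trials x channels
W_true = rng.normal(size=(len(centers), n_vox))         # channels x voxels
B_train = C_train @ W_true + 0.1 * rng.normal(size=(n_train, n_vox))

# Step 1 (encoding): estimate channel-to-voxel weights by least squares.
W_hat, *_ = np.linalg.lstsq(C_train, B_train, rcond=None)

# Step 2 (inversion): recover channel responses for a held-out trial.
B_test = channel_responses([45.0], centers) @ W_true    # 1 x voxels
C_hat, *_ = np.linalg.lstsq(W_hat.T, B_test.T, rcond=None)

# Decode orientation from the reconstructed channel profile via a
# population vector on doubled angles (180-degree periodicity).
ang = np.deg2rad(2.0 * centers)
c = C_hat.ravel()
decoded = (np.rad2deg(np.arctan2((c * np.sin(ang)).sum(),
                                 (c * np.cos(ang)).sum())) / 2.0) % 180.0
print(round(decoded, 1))  # close to the true 45 deg
```

The inversion step is what allows a feature never shown at training (here, the 45-degree test orientation) to be read out from the reconstructed channel profile rather than from any single voxel.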
Vision Research | 2006
Won Mok Shim; Patrick Cavanagh
In this study, we examined the relation between motion-induced position shifts and the position shifts caused by saccades. When a stimulus is flashed briefly around the time of a saccade, its perceived position is mislocalized toward the saccade target: if the flash is in front of the saccade target, it appears shifted in the direction of the eye movement, whereas a test flashed beyond the saccade target is displaced back toward the saccade target (bi-directional saccadic compression: Ross, J., Morrone, M. C., and Burr, D. C. (1997). Compression of visual space before saccades. Nature, 386, 598-601). Motion-induced position shifts (in the absence of eye movements) have been demonstrated for a variety of stimuli, but the illusory position shift is always found to be in the same direction as the motion. However, all previous studies presented the tests either along or beside the motion path, never beyond its end point. We now test this region beyond the motion path and find that the apparent location of a test in this region is shifted in the direction opposite to the motion, back toward the motion end point. In contrast, when the flash was presented between the beginning and end of the motion path, it was shifted in the direction of motion, again toward the motion end point. Together, these shifts indicate a compression of perceived locations toward the end point of the apparent motion. Control experiments confirmed that this effect was due neither to the Fröhlich effect induced by apparent motion from the test flash to the second disc nor to foveal compression. The correspondence between compression toward the end point of apparent motion and saccadic compression toward the saccade target suggests that attentional shifts or planned eye movement signals may play a role in both.
Visual Cognition | 2010
Yuhong V. Jiang; Mi Young Kwon; Won Mok Shim; Bo Yeong Won
Is the visual representation of an object affected by whether surrounding objects are identical to it, different from it, or absent? To address this question, we tested perceptual priming, visual short-term memory, and long-term memory for objects presented in isolation or with other objects. Experiment 1 used a priming procedure, where the prime display contained a single face, four identical faces, or four different faces. Subjects identified the gender of a subsequent probe face that either matched or mismatched one of the prime faces. Priming was stronger when the prime was four identical faces than when it was a single face or four different faces. Experiments 2 and 3 asked subjects to encode four different objects presented on four displays. Holding memory load constant, visual memory was better when each of the four displays contained four duplicates of a single object than when each display contained a single object. These results suggest that an object's perceptual and memory representations are enhanced when it is presented with identical objects, revealing redundancy effects in visual processing.