Publications


Featured research published by Joo-Hyun Song.


Trends in Cognitive Sciences | 2009

Hidden cognitive states revealed in choice reaching tasks.

Joo-Hyun Song; Ken Nakayama

Perceptual and cognitive processes have largely been inferred from reaction times and accuracies obtained from discrete responses. However, discrete responses are unlikely to capture dynamic internal processes that occur in parallel and unfold over time. Recent studies measuring continuous hand movements during target choice reaching tasks reveal the temporal evolution of hidden internal events. For instance, the direction of curved reaching trajectories reflects attention, language representations and the spatial number line, in addition to interactions between the ventral and dorsal visual streams. This elucidates the flow of earlier cognitive states into motor outputs. Thus, this line of research provides new opportunities to integrate information across different disciplines such as perception, cognition and action, which have usually been studied in isolation.


Vision Research | 2008

Target selection in visual search as revealed by movement trajectories

Joo-Hyun Song; Ken Nakayama

We examined target selection for visually guided reaching movements in visual search, in which participants reached to an odd-colored target presented with two homogeneous distractors. The colors of the target and distractors were randomly switched for each trial between red and green, and the location of the target was varied. Therefore either color could be a distractor or target, and the identity was resolved by grouping two distractors having the same color. Thus, there was ongoing competition between a target and distractors. In some trials, reaches were directed to the target, and in other trials, reaches were initially directed towards a distractor and corrected in mid-flight, showing highly curved trajectories. Interestingly, trials with highly curved trajectories were no less efficient in terms of accuracy or total time. The extra time taken up in movement duration was offset by shorter initial latencies. By analyzing curved trajectories, we demonstrated that corrective movements occur shortly after the onset of initial movement, suggesting that a new corrective target is selected even before the initial movement is executed. This provides an explanation as to why misdirected reaches, hastily initiated, can be corrected with minimal loss in overall efficiency. In addition, our results show that the details of movement trajectories allow us to visualize the dynamics of target selection as they unfold in time.
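The analysis above depends on quantifying how strongly a reach trajectory curves. As a rough illustration only, and not the authors' actual analysis code, a common curvature index in this literature is the maximum perpendicular deviation of the hand path from the straight line joining its start and end points. A minimal Python sketch:

```python
import math

def max_deviation(trajectory):
    """Maximum perpendicular deviation of a 2D trajectory (a list of
    (x, y) samples) from the straight start-to-end line. Straight
    reaches score near zero; reaches first directed toward a
    distractor and corrected in mid-flight score high."""
    (x0, y0), (x1, y1) = trajectory[0], trajectory[-1]
    dx, dy = x1 - x0, y1 - y0
    length = math.hypot(dx, dy)
    if length == 0:
        return 0.0
    # Perpendicular distance of each sample from the start-end line:
    # magnitude of the 2D cross product divided by the line length.
    return max(abs((px - x0) * dy - (py - y0) * dx)
               for px, py in trajectory) / length

# A straight reach versus one that veers sideways before correcting.
print(max_deviation([(0, 0), (0, 5), (0, 10)]))  # 0.0
print(max_deviation([(0, 0), (3, 5), (0, 10)]))  # 3.0
```

Trajectories exceeding some threshold on this index could then be classed as "highly curved" and analyzed separately, as in the study above.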


NeuroImage | 2006

Visual working memory for simple and complex features: an fMRI study.

Joo-Hyun Song; Yuhong V. Jiang

Visual working memory (VWM) allows us to hold visual information briefly in our minds after its disappearance. It is important for bridging the present to the immediate past. Previous neuroscience studies on VWM have shown that several parietal, frontal, and occipitotemporal brain regions subserve this function. Those studies, however, have often focused on VWM for a single property, such as color. Yet, in behavior, the capacity of VWM is sensitive to the complexity of to-be-remembered visual features. How do different brain areas represent VWM for visual features of different complexity and for combinations of features? To address this question, we used functional MRI to study the response profile of several brain regions in three VWM tasks. In all tasks, subjects saw 1 to 7 colored polygons and had to remember their color (a simple feature), shape (a complex feature), or both color and shape. Behavioral performance showed that VWM reached capacity limit at about 3 colors, 2 shapes, and 2 compound objects. In the fMRI data, we found different functional profiles for frontal, parietal, and occipitotemporal regions. Specifically, the posterior parietal cortex was sensitive to both featural and VWM load manipulations. The prefrontal regions were sensitive to VWM load manipulation but relatively insensitive to featural differences. The occipitotemporal regions were sensitive to featural differences, but not to VWM load manipulation. We propose that the response properties of these regions can jointly account for several findings in human VWM behavior.


Journal of Vision | 2006

Role of focal attention on latencies and trajectories of visually guided manual pointing.

Joo-Hyun Song; Ken Nakayama

Previous studies have shown that an odd-colored target among uniformly colored distractors can be rapidly detected and localized using broadly distributed attention over an entire display. In the current study, we show that such a broadly distributed attentional allocation is not sufficient for seemingly effortless goal-directed manual pointing. Latencies and movement durations of manual pointing in odd-colored search tasks became shorter, and curved trajectories decreased, as the number of distractors or the number of target color repetitions increased. Because these manipulations have been shown to facilitate the deployment of narrowly focused attention to a target, but not of distributed attention, this adds further support to the view that focal attention is necessary for goal-directed action. In addition, the presence of highly curved movement trajectories, directed first to a distractor and then to the target, reflects ongoing changes in focal attentional deployment and target selection.


Psychonomic Bulletin & Review | 2005

High-capacity spatial contextual memory

Yuhong V. Jiang; Joo-Hyun Song; Amanda Rigas

Humans show implicit memory for complex spatial layouts, which aids in subsequent processing of these layouts. Research efforts in the past 5 years have focused primarily on a single session of training involving a dozen repeated displays. Yet every day, people encounter many more visual layouts than were presented in such experiments. In this study, we trained subjects to learn 60 repeated displays, randomly intermixed within 1,800 nonrepeated displays, spread over 5 consecutive days. On each day, the subjects conducted visual search on 360 new displays and a new set of 12 repeated displays, each repeated 30 times. Contextual memory was observed daily. One week after the fifth session, the subjects still searched faster on the repeated displays learned previously. We conclude that the visual system has a high capacity for learning and retaining repeated spatial context, an ability that may compensate for our severe limitations in visual attention and working memory.


Proceedings of the National Academy of Sciences of the United States of America | 2011

Deficits in reach target selection during inactivation of the midbrain superior colliculus

Joo-Hyun Song; Robert D. Rafal; Robert M. McPeek

Purposive action requires the selection of a single movement goal from multiple possibilities. Neural structures involved in movement planning and execution often exhibit activity related to target selection. A key question is whether this activity is specific to the type of movement produced by the structure, perhaps consisting of a competition among effector-specific movement plans, or whether it constitutes a more abstract, effector-independent selection signal. Here, we show that temporary focal inactivation of the primate superior colliculus (SC), an area involved in eye-movement target selection and execution, causes striking target selection deficits for reaching movements, which cannot be readily explained as a simple impairment in visual perception or motor execution. This indicates that target selection activity in the SC does not simply represent a competition among eye-movement goals and, instead, suggests that the SC contributes to a more general purpose priority map that influences target selection for other actions, such as reaches.


Journal of Experimental Psychology: Human Perception and Performance | 2005

Hyperspecificity in Visual Implicit Learning: Learning of Spatial Layout Is Contingent on Item Identity.

Yuhong V. Jiang; Joo-Hyun Song

Humans conduct visual search faster when the same display is presented for a second time, showing implicit learning of repeated displays. This study examines whether learning of a spatial layout transfers to other layouts that are occupied by items of new shapes or colors. The authors show that spatial context learning is sometimes contingent on item identity. For example, when the training session included some trials with black items and other trials with white items, learning of the spatial layout became specific to the trained color: no transfer was seen when items were in a new color during testing. However, when the training session included only trials in black (or white), learning transferred to displays with a new color. Similar results held when items changed shapes after training. The authors conclude that implicit visual learning is sensitive to trial context and that spatial context learning can be identity contingent.


Journal of Vision | 2007

Automatic adjustment of visuomotor readiness

Joo-Hyun Song; Ken Nakayama

Participants initiated a reaching movement to a single target more rapidly than to an odd-color target among distractors when the two trial types were presented in separate blocks, reflecting differentiated states of sensorimotor readiness for a relatively easy (single-target) versus a harder (odd-color) task. This pattern was eliminated when the two trial types were randomly mixed. Latencies for the easy single trials increased, and those for the harder odd-color trials decreased, showing homogenization. The faster movement initiation in the odd-color target task was accompanied by curved trajectories, directed toward a distractor initially but corrected in mid-flight. Two possible hypotheses could account for this differentiated adjustment in visuomotor readiness: (1) explicit knowledge of upcoming trial types and (2) implicit learning derived from the history of the very recent past, that is, repetition of the same type of trials. To distinguish between these two accounts, we included a third condition in which the trial types were predictably alternated. Contrary to the explicit knowledge hypothesis, this also led to homogenization of initiation latencies and to curved trajectories. We conclude that visuomotor readiness is automatically adjusted by the recent experience of trial difficulty.


Journal of Vision | 2011

The eye dominates in guiding attention during simultaneous eye and hand movements

Aarlenne Z. Khan; Joo-Hyun Song; Robert M. McPeek

Prior to the onset of a saccade or a reach, attention is directed to the goal of the upcoming movement. However, it remains unknown whether attentional resources are shared across effectors for simultaneous eye and hand movements. Using a 4-AFC shape discrimination task, we investigated attentional allocation during the planning of a saccade alone, reach alone, or combined saccade and reach to one of five peripheral locations. Target discrimination was better when the probe appeared at the goal of the impending movement than when it appeared elsewhere. However, discrimination performance at the movement goal was not better for combined eye-hand movements compared to either effector alone, suggesting a shared limited attentional resource rather than separate pools of effector-specific attention. To test which effector dominates in guiding attention, we then separated eye and hand movement goals in two conditions: (1) cued reach/fixed saccade--subjects made saccades to the same peripheral location throughout the block, while the reach goal was cued and (2) cued saccade/fixed reach--subjects made reaches to the same location, while the saccade goal was cued. For both conditions, discrimination performance was consistently better at the eye goal than the hand goal. This indicates that shared attentional resources are guided predominantly by the eye during the planning of eye and hand movements.


Journal of Neurophysiology | 2009

Eye-Hand Coordination During Target Selection in a Pop-Out Visual Search

Joo-Hyun Song; Robert M. McPeek

We examined the coordination of saccades and reaches in a visual search task in which monkeys were rewarded for reaching to an odd-colored target among distractors. Eye movements were unconstrained, and monkeys typically made one or more saccades before initiating a reach. Target selection for reaching and saccades was highly correlated with the hand and eyes landing near the same final stimulus both for correct reaches to the target and for incorrect reaches to a distractor. Incorrect reaches showed a bias in target selection: they were directed to the distractor in the same hemifield as the target more often than to other distractors. A similar bias was seen in target selection for the initial saccade in correct reaching trials with multiple saccades. We also examined the temporal coupling of saccades and reaches. In trials with a single saccade, a reaching movement was made after a fairly stereotyped delay. In multiple-saccade trials, a reach to the target could be initiated near or even before the onset of the final target-directed saccade. In these trials, the initial trajectory of the reach was often directed toward the fixated distractor before veering toward the target around the time of the final saccade. In virtually all cases, the eyes arrived at the target before the hand, and remained fixated until reach completion. Overall, these results are consistent with flexible temporal coupling of saccade and reach initiation, but fairly tight coupling of target selection for the two types of action.

Collaboration


Dive into Joo-Hyun Song's collaborations.

Top Co-Authors


Robert M. McPeek

Smith-Kettlewell Institute


Christopher D. Erb

University of North Carolina at Greensboro


Naomi Takahashi

Smith-Kettlewell Institute
