Hong-Jin Sun
McMaster University
Publications
Featured research published by Hong-Jin Sun.
Experimental Brain Research | 2004
Hong-Jin Sun; Jennifer L. Campos; George S. W. Chan
One of the fundamental requirements for successful navigation through an environment is the continuous monitoring of distance travelled. To do so, humans normally use one or a combination of visual, proprioceptive/efferent, vestibular, and temporal cues. In the real world, information from one sensory modality is normally congruent with information from other modalities; hence, studying the nature of sensory interactions is often difficult. In order to decouple the natural covariation between different sensory cues, we used virtual reality technology to vary the relation between the information generated from visual sources and the information generated from proprioceptive/efferent sources. When we manipulated the stimuli such that the visual information was coupled in various ways to the proprioceptive/efferent information, human subjects predominantly used visual information to estimate the ratio of two traversed path lengths. Although proprioceptive/efferent information was not used directly, the mere availability of proprioceptive information increased the accuracy of relative path length estimation based on visual cues, even though the proprioceptive/efferent information was inconsistent with the visual information. These results convincingly demonstrated that active movement (locomotion) facilitates visual perception of path length travelled.
Perception | 2004
Hong-Jin Sun; Jennifer L. Campos; Meredith Young; George S. W. Chan; Colin G. Ellard
By systematically varying cue availability in the stimulus and response phases of a series of same-modality and cross-modality distance matching tasks, we examined the contributions of static visual information, idiothetic information, and optic flow information. The experiment was conducted in a large-scale, open, outdoor environment. Subjects were presented with information about a distance and were then required to turn 180° before producing a distance estimate. Distance encoding and responding occurred via: (i) visually perceived target distance, or (ii) traversed distance through either blindfolded or sighted locomotion. The results demonstrated that subjects performed with similar accuracy across all conditions. In conditions in which the stimulus and the response were delivered in the same mode, constant error was minimal when visual information was absent, whereas overestimation was observed when visual information was present. In conditions in which the stimulus and response modes differed, a consistent error pattern was observed. By systematically comparing complementary conditions, we found that the availability of visual information during locomotion (particularly optic flow) led to an ‘under-perception’ of movement relative to conditions in which visual information was absent during locomotion.
Memory & Cognition | 2004
Hong-Jin Sun; George S. W. Chan; Jennifer L. Campos
In this study, we examined the orientation dependency of spatial representations following various learning conditions. We assessed the spatial representations of human participants after they had learned a complex spatial layout via map learning, via navigating within a real environment, or via navigating through a virtual simulation of that environment. Performances were compared between conditions involving (1) multiple versus single body orientations, (2) active versus passive learning, and (3) high versus low levels of proprioceptive information. Following learning, the participants were required to produce directional judgments to target landmarks. Results showed that the participants developed orientation-specific spatial representations following map learning and passive learning, as indicated by better performance when tested from the initial learning orientation. These results suggest that neither the number of vantage points nor the level of proprioceptive information experienced is the determining factor; rather, it is the active aspect of direct navigation that leads to the development of orientation-free representations.
Neuroscience Letters | 2009
Qiang Liu; Hong Li; Jennifer L. Campos; Qi Wang; Ye Zhang; Jiang Qiu; Qinglin Zhang; Hong-Jin Sun
This study examined the electrophysiological bases of the effect of language on color perception. In a visual search task, a target was presented to the left or right visual field. The target color was either from the same category as a set of distractors (within-category condition) or from a different category (between-category condition). For both category conditions, the targets elicited a clear N2pc (N2-posterior-contralateral) component in the event-related potential (ERP) in the contralateral hemisphere. In the left hemisphere only, the N2pc amplitude for the between-category condition was larger than that for the within-category condition. These results indicate that the N2pc could be used as an index to describe the lateralization effect of language on color perception.
European Journal of Neuroscience | 2010
Jennifer L. Campos; Patrick Byrne; Hong-Jin Sun
Optic flow is the stream of retinal information generated when an observer's body, head or eyes move relative to the environment, and it plays a defining role in many influential theories of active perception. Traditionally, studies of optic flow have used artificially generated flow in the absence of the body-based cues typically coincident with self-motion (e.g. proprioceptive, efference copy, and vestibular). While optic flow alone can be used to judge the direction, speed and magnitude of self-motion, little is known about the precise extent to which it is used during natural locomotor behaviours such as walking. In this study, walked distances were estimated in a flat, open, outdoor environment devoid of distinct proximal visual landmarks, using two novel complementary techniques to dissociate the contributions of optic flow from those of body-based cues. First, lenses were used to magnify or minify the visual environment. Second, two walked distances were presented in succession and were either the same or different in magnitude; vision was either present or absent in each. A computational model was developed based on the results of both experiments. Highly convergent cue-weighting values were observed, indicating that the brain consistently weighted body-based cues about twice as heavily as optic flow, the combination of the two cues being additive. The current experiments are among the first to isolate and quantify the contributions of optic flow during natural human locomotor behaviour.
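The additive, roughly 2:1 cue weighting described above can be sketched as a simple weighted-average model. This is a minimal illustration, not the paper's fitted model; the function name and weight values are assumptions chosen to match the reported ratio.

```python
# Minimal sketch of an additive cue-combination model for walked-distance
# estimates, assuming the ~2:1 weighting of body-based cues over optic
# flow reported above. Weights are illustrative, not fitted values.

def combine_distance_estimates(body_cue, optic_flow, w_body=2.0, w_flow=1.0):
    """Weighted additive combination of two distance estimates (metres)."""
    total = w_body + w_flow
    return (w_body * body_cue + w_flow * optic_flow) / total

# Example: a minifying lens makes optic flow signal a longer distance than
# the body-based cues; the combined estimate is pulled only one-third of
# the way toward the visual cue.
estimate = combine_distance_estimates(body_cue=10.0, optic_flow=13.0)
print(estimate)  # 11.0
```

Under this additive scheme, doubling the weight on body-based cues means a visual magnification or minification shifts the final estimate by only a third of the visual discrepancy.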
Journal of Vision | 2012
Guang Zhao; Qiang Liu; Jun Jiao; Peiling Zhou; Hong Li; Hong-Jin Sun
Repeated configurations of random elements induce better search performance than displays of novel random configurations. The mechanism of this contextual cueing effect has been investigated through the use of the RT × Set Size function. Views diverge on whether the contextual cueing effect is driven by attentional guidance, by facilitation of initial perceptual processing, or by response selection. To explore this question, we recorded eye movements in this study, which offer information about the substages of the search task. The results suggest that the contextual cueing effect is driven mainly by attentional guidance, although facilitation of response selection also plays a role.
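The RT × Set Size function mentioned above can be illustrated with a short sketch: the slope of reaction time against set size (ms per item) indexes search efficiency, and a shallower slope for repeated configurations is the classic signature of attentional guidance. The data below are hypothetical, purely to show the computation.

```python
# Sketch of the RT x Set Size analysis used in contextual-cueing studies:
# a shallower search slope (ms per item) for repeated configurations is
# taken as evidence of attentional guidance. Data are hypothetical.
import statistics

def search_slope(set_sizes, rts):
    """Least-squares slope of reaction time (ms) against set size."""
    mean_x = statistics.mean(set_sizes)
    mean_y = statistics.mean(rts)
    num = sum((x - mean_x) * (y - mean_y) for x, y in zip(set_sizes, rts))
    den = sum((x - mean_x) ** 2 for x in set_sizes)
    return num / den

sizes = [8, 12, 16]
repeated_rts = [620, 660, 700]   # shallower slope: guided search
novel_rts = [640, 720, 800]      # steeper slope: unguided search
print(search_slope(sizes, repeated_rts))  # 10.0 ms/item
print(search_slope(sizes, novel_rts))     # 20.0 ms/item
```

Eye-movement recording refines this picture by decomposing total RT into the initial latency, scan path, and final verification stages that the slope alone conflates.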
Experimental Brain Research | 1997
Hong-Jin Sun; Barrie J. Frost
Recent psychophysical and neurophysiological studies have suggested that, in mammals, there are interactions between the P (colour processing) and M (motion processing) visual pathways, which were previously believed to be parallel and separate. In this study, we determined the role colour information plays in the coding of object motion in the tectofugal pathway of pigeons. The responses of motion-sensitive neurons in the tectum to moving stimuli formed by chromatic contrast were recorded extracellularly using standard single-unit recording techniques. A moving coloured object was presented on a uniform (opponent coloured) background (e.g. blue-on-yellow, red-on-green and black-on-white). Through systematic manipulation of the luminance contrast between object and background, an equiluminant condition was generated. It was found that, at chromatic equiluminance, the majority of cells maintained some level of response. The mean magnitude of the response at equiluminance was about one-third of the response at maximal contrast to the same chromatic border. These results suggest that tectal units can detect motion of a pattern defined by a pure colour contour, although the strength of output is considerably weaker than that for the movement of patterns formed by luminance contrast.
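The equiluminance manipulation described above can be sketched numerically: sweep the object's luminance and locate the level at which its luminance contrast with the background vanishes, leaving only the chromatic contour to carry the motion signal. The luminance values below are illustrative, not the study's stimulus parameters.

```python
# Sketch of locating the equiluminant point in a chromatic motion display:
# sweep the object's luminance and find where its Michelson contrast with
# the background is zero, so that only the colour contour remains.
# Luminance values (cd/m^2) are illustrative.

def michelson_contrast(l_object, l_background):
    """Michelson contrast between object and background luminance."""
    return (l_object - l_background) / (l_object + l_background)

background = 20.0
sweep = [5.0, 10.0, 15.0, 20.0, 25.0, 30.0]
equiluminant = min(sweep, key=lambda l: abs(michelson_contrast(l, background)))
print(equiluminant)  # 20.0
```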
Journal of Vision | 2015
Xin Gao; Hongmei Yan; Hong-Jin Sun
Microsaccades (MSs) are small eye movements that occur during attempted visual fixation. While most studies concerning MSs focus on their roles in visual processing, some also suggest that the MS rate can be modulated by the amount of mental exertion involved in nonvisual processing. The current study focused on the effects of task difficulty on MS rate in a nonvisual mental arithmetic task. Experiment 1 revealed a general inverse relationship between MS rate and subjective task difficulty. During Experiment 2, three task phases with different requirements were identified: during calculation (between stimulus presentation and response), postcalculation (after reporting an answer), and a control condition (undergoing a matching sequence of events without the need to make a calculation). The MS rate approximately doubled from the during-calculation phase to the postcalculation phase, and was significantly higher in the control condition than in the postcalculation phase. Only during calculation did the MS rate generally decrease with greater task difficulty. Our results suggest that nonvisual cognitive processing can suppress the MS rate, and that the extent of such suppression is related to task difficulty.
Journal of Vision | 2011
Jing-Jiang Yan; Bailey Lorv; Hong Li; Hong-Jin Sun
As an object approaches an observer's eye, the optical variable tau, defined as the inverse relative expansion rate of the object's image on the retina (D. N. Lee, 1976), approximates the time to collision (TTC). Many studies have provided support for the use of TTC by human observers, but evidence for the exclusive use of TTC generated by tau remains inconclusive. In the present study, observers were presented with a visual display of two sequentially approaching objects and asked to compare their TTCs at the moment the objects vanished. Upon dissociating several variables that may have potentially contributed to TTC perception, we found that observers were most sensitive to TTC information when completing the task and less sensitive to non-time variables, such as those that specified distance to collision, speed, and object size. Moreover, when we manipulated the presented variables to provide conflicting TTC information, TTC specified by tau was weighted much more heavily than TTC derived from distance and speed. In conclusion, our results suggest that even in the presence of other monocular sources of information, observers had a greater tendency to use optical tau specifically when making relative TTC judgments.
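Lee's tau can be made concrete with a short numerical sketch: tau is the object's optical angle divided by its rate of expansion, and for an object approaching at constant speed it closely approximates TTC = distance / speed. The geometry below is a standard small-angle setup; the specific sizes and speeds are illustrative assumptions.

```python
import math

# Sketch of Lee's (1976) tau: the optical angle subtended by an approaching
# object, divided by its rate of expansion, approximates time-to-collision
# (distance / speed) at constant approach speed. Values are illustrative.

def optical_angle(size, distance):
    """Visual angle (radians) subtended by an object of a given size."""
    return 2.0 * math.atan(size / (2.0 * distance))

def tau(size, distance, speed, dt=1e-4):
    """tau = theta / (d theta / dt), estimated by a finite difference."""
    theta_now = optical_angle(size, distance)
    theta_next = optical_angle(size, distance - speed * dt)
    dtheta_dt = (theta_next - theta_now) / dt
    return theta_now / dtheta_dt

# A 0.5 m object, 10 m away, approaching at 2 m/s:
print(tau(0.5, 10.0, 2.0))  # close to TTC = 10 / 2 = 5.0 s
```

Note that tau is computed entirely from retinal quantities (angle and expansion rate), which is why an observer can recover TTC without knowing the object's physical distance, size, or speed.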
Cyberpsychology, Behavior, and Social Networking | 2003
Hong-Jin Sun; Amanda J. Lee; Jennifer L. Campos; George S. W. Chan; Da-Hui Zhang
This study assessed the relative contributions of visual and proprioceptive/motor information during self-motion in a virtual environment using a speed discrimination task. Subjects wore a head-mounted display and rode a stationary bicycle along a straight path in an empty, seemingly infinite hallway with random surface texture. For each trial, subjects were required to pedal the bicycle along two paths at two different speeds (a standard speed and a comparison speed) and subsequently report whether the second speed travelled was faster than the first. The standard speed remained the same while the comparison speed was varied between trials according to the method of constant stimuli. When visual and proprioceptive/motor cues were provided separately or in combination, the speed discrimination thresholds were comparable, suggesting that either cue alone is sufficient. When the relation between visual and proprioceptive information was made inconsistent by varying optic flow gain, the resulting psychometric functions shifted along the horizontal axis (pedalling speed). The degree of separation between these functions indicated that both optic flow and proprioceptive cues contributed to speed estimation, with proprioceptive cues being dominant. These results suggest an important role for proprioceptive information in speed estimation during self-motion.
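The method-of-constant-stimuli analysis described above can be sketched briefly: the proportion of "faster" judgments is plotted against comparison speed, and the point of subjective equality (PSE) is the speed judged faster 50% of the time; a horizontal shift of this function under an optic-flow gain manipulation indexes the visual cue's contribution. The data below are hypothetical, and real analyses typically fit a cumulative Gaussian rather than interpolating.

```python
# Sketch of reading a point of subjective equality (PSE) off a psychometric
# function from a method-of-constant-stimuli experiment: the comparison
# speed judged "faster" 50% of the time. Data are hypothetical; a real
# analysis would usually fit a cumulative Gaussian.

def pse(speeds, p_faster, criterion=0.5):
    """Linearly interpolate the speed at which p('faster') crosses 0.5."""
    for (x0, y0), (x1, y1) in zip(zip(speeds, p_faster),
                                  zip(speeds[1:], p_faster[1:])):
        if y0 <= criterion <= y1:
            return x0 + (criterion - y0) * (x1 - x0) / (y1 - y0)
    raise ValueError("criterion never crossed")

comparison_speeds = [4.0, 4.5, 5.0, 5.5, 6.0]   # pedalling speeds (m/s)
p_judged_faster = [0.05, 0.20, 0.50, 0.80, 0.95]
print(pse(comparison_speeds, p_judged_faster))  # 5.0
```

A change in optic-flow gain that shifts this whole function sideways, without changing its slope, is exactly the pattern the abstract uses to partition the contributions of visual and proprioceptive cues.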