Hyung-Chul O. Li
Kwangwoon University
Publications
Featured research published by Hyung-Chul O. Li.
Journal of Broadcast Engineering | 2010
Hyung-Chul O. Li
The methods developed so far to measure visual fatigue are few and lack validity, and, more importantly, they do not capture the complex structure of visual fatigue. The purpose of this research was to analyze the factors that make up visual fatigue and to develop a valid method for measuring it. The results are summarized as follows. First, we found that 3D visual fatigue comprises four independent factors (visual stress, eye pain, body pain, and image blurring). Second, we developed 29 items that measure these four factors of 3D visual fatigue. Finally, viewing duration and binocular disparity affected visual fatigue as expected. These results imply that the developed method measures 3D visual fatigue validly.
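As a rough illustration of how such a multi-factor questionnaire yields factor scores, the sketch below averages item ratings within each factor. The factor names follow the abstract, but the item-to-factor assignment, the number of items per factor, and the 1-7 rating scale are illustrative assumptions, not the published instrument.

```python
# Minimal sketch of scoring a multi-factor fatigue questionnaire.
# Factor names follow the abstract; the item-to-factor assignment,
# items per factor, and the 1-7 rating scale are assumptions.
from statistics import mean

FACTOR_ITEMS = {                      # hypothetical item indices (1..29)
    "visual_stress":  range(1, 9),
    "eye_pain":       range(9, 17),
    "body_pain":      range(17, 24),
    "image_blurring": range(24, 30),
}

def score_fatigue(responses: dict[int, int]) -> dict[str, float]:
    """Average the ratings of the items belonging to each factor."""
    return {
        factor: mean(responses[i] for i in items)
        for factor, items in FACTOR_ITEMS.items()
    }

# Example: a participant who rates every item as 4 on a 1-7 scale
scores = score_fatigue({i: 4 for i in range(1, 30)})
print(scores)   # {'visual_stress': 4.0, 'eye_pain': 4.0, ...}
```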
Attention, Perception, & Psychophysics | 2001
Hyung-Chul O. Li; Frederick A. A. Kingdom
Under what circumstances is the common motion of a group of elements more easily perceived when the elements differ in color and/or luminance polarity from their surround? Croner and Albright (1997), using a conventional global motion paradigm, first showed that motion coherence thresholds fell when target and distractor elements were made different in color. However, in their paradigm, there was a cue in the static view of the stimulus as to which elements belonged to the target. Arguably, in order to determine whether the visual system automatically groups, or prefilters, the image into different color maps for motion processing, such static form cues should be eliminated. Using various arrangements of the global motion stimulus in which we eliminated all static form cues, we found that global motion thresholds were no better when target and distractors differed in color than when they were identical, except under certain circumstances in which subjects had prior knowledge of the specific target color. We conclude that, in the absence of either static form cues or the possibility of selective attention to the target color, features with similar colors/luminance-polarities are not automatically grouped for global motion analysis.
Vision Research | 1999
Hyung-Chul O. Li; Frederick A. A. Kingdom
Motion information is important to vision for extracting the 3-D (three-dimensional) structure of an object, as evidenced by the compelling percept of three-dimensionality attainable in displays which are purely motion-defined. It has recently been shown that when subjects view a rotating transparent cylinder of dots simulated with parallel projection, they rarely perceive rotation reversals which are physically introduced (Treue, Andersen, Ando & Hildreth, Vision Research, 35 (1995) 139-148). We show, however, that when the elements defining the cylinder are oriented, the number of perceived reversals increases systematically to near maximum as the difference between element orientations on the two surfaces increases. These results imply that structure-from-motion mechanisms are capable of exploiting local feature differences between the different surfaces of a moving object.
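For illustration only, the sketch below simulates the kind of stimulus described above: dots on a transparent cylinder are rotated and projected with parallel projection, so depth is discarded and a front-surface dot and a back-surface dot can generate the same image motion, which is why physically introduced reversals are hard to detect. All parameter values are assumptions, not the published stimulus settings.

```python
# Sketch of a rotating-cylinder SFM stimulus under parallel projection.
# Only the horizontal position x = r*sin(theta) and the height y are
# visible on screen; depth (z) is discarded, so the 2-D image alone does
# not say which surface moved left. Parameter values are illustrative.
import numpy as np

n_dots, radius, omega = 200, 1.0, 0.5            # dots, units, rad/s
theta = np.random.uniform(0, 2 * np.pi, n_dots)  # angular position of each dot
y = np.random.uniform(-1.0, 1.0, n_dots)         # height on the cylinder

def project(t: float) -> np.ndarray:
    """Screen (x, y) of every dot at time t under parallel projection."""
    x = radius * np.sin(theta + omega * t)       # depth is simply dropped
    return np.column_stack([x, y])

frame_0 = project(0.0)
frame_1 = project(0.1)
```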
Perception | 1998
Hyung-Chul O. Li; Frederick A. A. Kingdom
The aim of the experiments was to examine whether the detection of structure-from-motion (SFM) in noise was facilitated when target and noise were segregated by colour and/or luminance polarity. The SFM target was a rotating ‘V-shape’ structure simulated with limited-lifetime Gaussian micropatterns and embedded in random-motion noise. Threshold levels of V-shape slant were measured for stimuli in which target and noise were segregated or unsegregated by colour/luminance, and under two conditions, with and without static form cues to the SFM target. The presence or absence of static form cues to the SFM target was manipulated by varying the relative numbers of micropatterns in target and noise. In the absence of static form cues, segregation of target and noise by colour and/or luminance polarity did not facilitate target detection, even when subjects knew which micropatterns belonged to the target. On the other hand, when static form cues were present, segregation improved performance. These results imply that SFM processing is ‘form-cue invariant’ except when the target form is immediately identifiable in the static view of the stimulus. The significance of the results for understanding the role of colour vision in breaking camouflage and in ‘grouping’ is discussed.
Vision Research | 2002
Hyung-Chul O. Li; Eli Brenner; Frans W. Cornelissen; Eun-Soo Kim
Even when the retinal image of a static scene is constantly shifting, as occurs when the viewer pursues a small moving object with his or her eyes, the scene is usually correctly perceived to be static. Following early suggestions by von Helmholtz, it is commonly believed that this spatial stability is achieved by combining retinal and extra-retinal signals. Here, we report a perceptually salient 2D shape distortion that can arise during pursuit. We provide evidence that the perceived 2D shape reflects retinal image contents alone, implying that the extra-retinal signal is ignored when judging 2D shape.
Scientific Reports | 2016
Elena Gheorghiu; Frederick A. A. Kingdom; Aaron Remkes; Hyung-Chul O. Li; Stéphane Rainville
The role of color in the visual perception of mirror-symmetry is controversial. Some reports support the existence of color-selective mirror-symmetry channels, others that mirror-symmetry perception is merely sensitive to color-correlations across the symmetry axis. Here we test between the two ideas. Stimuli consisted of colored Gaussian-blobs arranged either mirror-symmetrically or quasi-randomly. We used four arrangements: (1) ‘segregated’ – symmetric blobs were of one color, random blobs of the other color(s); (2) ‘random-segregated’ – as above but with the symmetric color randomly selected on each trial; (3) ‘non-segregated’ – symmetric blobs were of all colors in equal proportions, as were the random blobs; (4) ‘anti-symmetric’ – symmetric blobs were of opposite-color across the symmetry axis. We found: (a) near-chance levels for the anti-symmetric condition, suggesting that symmetry perception is sensitive to color-correlations across the symmetry axis; (b) similar performance for random-segregated and non-segregated conditions, giving no support to the idea that mirror-symmetry is color selective; (c) highest performance for the color-segregated condition, but only when the observer knew beforehand the symmetry color, suggesting that symmetry detection benefits from color-based attention. We conclude that mirror-symmetry detection mechanisms, while sensitive to color-correlations across the symmetry axis and subject to the benefits of attention-to-color, are not color selective.
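For illustration only, the sketch below generates mirror-symmetric blob positions and assigns colors for two of the four arrangements described above ('segregated' and 'anti-symmetric'). The field size, blob count, and two-color palette are assumptions, not the published stimulus parameters.

```python
# Sketch of two of the blob arrangements: positions are mirrored about a
# vertical axis (x = 0); colors are assigned either 'segregated' (symmetric
# pairs share one color) or 'anti-symmetric' (opposite colors across the
# axis). Field size, blob count, and the palette are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n_pairs = 40
left = rng.uniform([-1.0, -1.0], [0.0, 1.0], size=(n_pairs, 2))  # left half-field
right = left * np.array([-1.0, 1.0])                             # mirror about x = 0

def colors(arrangement: str) -> tuple[np.ndarray, np.ndarray]:
    if arrangement == "segregated":       # both members of a pair share color 0
        return np.zeros(n_pairs, int), np.zeros(n_pairs, int)
    if arrangement == "anti-symmetric":   # members of a pair get opposite colors
        c = rng.integers(0, 2, n_pairs)
        return c, 1 - c
    raise ValueError(arrangement)

left_col, right_col = colors("anti-symmetric")
```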
Digital Holography and Three-Dimensional Imaging, paper DWA5 | 2008
Hyung-Chul O. Li; Junho Seo; Keetaek Kham; Seung-Hyun Lee
3D visual fatigue has delayed the widespread adoption of commercial 3D displays. This study shows how the full range of characteristics of subjective 3D visual fatigue has been measured and proposes a five-factor model.
Vision Research | 2001
Hyung-Chul O. Li; Frederick A. A. Kingdom
A compelling percept of three-dimensionality is attainable from a purely motion-defined simulation of a transparent rotating cylinder, referred to as 3-D structure-from-motion (SFM). Interestingly, subjects rarely perceive reversals of the cylinder's direction of rotation when they are introduced. Treue, Andersen, Ando, and Hildreth (Vision Res. 35 (1995) 139-148) have argued that this reflects the visual system's insensitivity to the textural detail on the cylinder's motion surfaces. We have recently shown, however, that with cylinders made from oriented micropatterns, motion reversals are perceived when the orientations of the micropatterns are different on the cylinder's front/back surfaces, suggesting that the visual system is sensitive to the type of feature in these stimuli (Vision Res. 39 (1999) 881-886). In the present study we extended this finding by testing for feature-sensitivity along other dimensions besides orientation, specifically spatial frequency, colour, and luminance polarity. We found that subjects perceived more rotation-direction reversals when the front/back surfaces of the cylinder were segregated, as opposed to non-segregated, by feature-type along all of these dimensions except, notably, colour. We also investigated the stage at which the feature-sensitivity is incorporated in 3-D SFM. We reasoned that if 3-D SFM mechanisms were tuned, or labeled, for feature-type, swapping of features during the cylinder's rotation would result in illusory reversals in just the feature-segregated condition, whereas if grouping of like-features preceded the formation of 3-D motion surfaces, no such illusory reversals would be expected. We found that feature-swapping resulted in more illusory reversals in the feature-segregated compared to the non-segregated conditions, supporting the mechanism-tuning, or labeling, hypothesis.
Journal of Broadcast Engineering | 2013
Hyung-Chul O. Li; JongJin Park; ShinWoo Kim
In order to understand 3D visual fatigue, it is necessary to examine the visual fatigue induced by camera parameters as well as that induced by pre-existing 3D content. In the present study, we examined the effects of camera parameters, such as roll misalignment error, shooting distance, and vergence condition, on 3D visual fatigue, and we modelled these effects. The results indicate that roll misalignment error, shooting distance, and vergence condition all affect 3D visual fatigue, and that the effect of roll misalignment error is evident specifically when screen disparity is relatively small.
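As a hedged illustration of how fatigue ratings can be related to such camera parameters, the sketch below fits an ordinary least-squares linear model. The data values and the linear form of the model are illustrative assumptions and do not reproduce the published model.

```python
# Minimal sketch of modelling fatigue ratings from camera parameters with
# ordinary least squares. Predictors follow the abstract (roll misalignment,
# shooting distance, vergence condition); the data and the linear form of
# the model are illustrative assumptions, not the published model.
import numpy as np

# columns: roll error (deg), shooting distance (m), vergence (0 = parallel, 1 = toed-in)
X = np.array([[0.0, 2.0, 0], [0.5, 2.0, 0], [1.0, 4.0, 1], [2.0, 6.0, 1]], float)
fatigue = np.array([1.2, 1.8, 2.4, 3.5])        # hypothetical mean ratings

design = np.column_stack([np.ones(len(X)), X])  # add an intercept term
coef, *_ = np.linalg.lstsq(design, fatigue, rcond=None)
print(dict(zip(["intercept", "roll", "distance", "vergence"], coef.round(3))))
```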
Journal of Broadcast Engineering | 2009
Hyung-Chul O. Li
The 3D shape as well as the depth of an object presented on a 3D display is perceptually distorted depending on viewing distance. It is quite undesirable that different observers perceive different depth and shape from the same object displayed on a 3D monitor. To resolve this perceptual distortion of 3D depth and shape, the degree of distortion must first be measured appropriately. As a basis for doing so, the present research proposes an instrument for measuring the degree of perceptual distortion of 3D depth and shape.