Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Zili Liu is active.

Publication


Featured research published by Zili Liu.


Vision Research | 1995

Object classification for human and ideal observers

Zili Liu; David C. Knill; Daniel Kersten

We describe a novel approach, based on ideal observer analysis, for measuring the ability of human observers to use image information for 3D object perception. We compute the statistical efficiency of subjects relative to an ideal observer for a 3D object classification task. After training on 11 different views of a randomly shaped thick wire object, subjects were asked which of a pair of noisy views of the object best matched the learned object. Efficiency relative to the actual information in the stimuli can be as high as 20%. Increases in object regularity (e.g. symmetry) lead to increases in the efficiency with which novel views of an object can be classified. Furthermore, such increases in regularity also lead to decreases in the effect of viewpoint on classification efficiency. Human statistical efficiencies relative to a 2D ideal observer exceeded 100%, thereby excluding all models which are sub-optimal relative to the 2D ideal.
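
For context, statistical efficiency in ideal observer analysis is conventionally defined as the squared ratio of human to ideal sensitivity (d'). The sketch below uses this standard textbook definition with hypothetical helper names; the paper's actual ideal observer for 3D object classification is considerably more involved.

from scipy.stats import norm

def d_prime(hit_rate, false_alarm_rate):
    # Sensitivity index d' estimated from hit and false-alarm rates
    return norm.ppf(hit_rate) - norm.ppf(false_alarm_rate)

def statistical_efficiency(d_human, d_ideal):
    # Conventional definition: efficiency = (d'_human / d'_ideal)^2
    # e.g. d_human = 1.0 with d_ideal = 2.24 gives an efficiency of roughly 20%
    return (d_human / d_ideal) ** 2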


Vision Research | 1999

The role of convexity in perceptual completion: beyond good continuation

Zili Liu; David W. Jacobs; Ronen Basri

Since the seminal work of the Gestalt psychologists, there has been great interest in understanding what factors determine the perceptual organization of images. While the Gestaltists demonstrated the significance of grouping cues such as similarity, proximity and good continuation, it has not been well understood whether their catalog of grouping cues is complete, in part due to the paucity of effective methodologies for examining the significance of various grouping cues. We describe a novel, objective method to study perceptual grouping of planar regions separated by an occluder. We demonstrate that the stronger the grouping between two such regions, the harder it will be to resolve their relative stereoscopic depth. We use this new method to call into question many existing theories of perceptual completion (Ullman, S. (1976). Biological Cybernetics, 25, 1-6; Shashua, A., & Ullman, S. (1988). 2nd International Conference on Computer Vision (pp. 321-327); Parent, P., & Zucker, S. (1989). IEEE Transactions on Pattern Analysis and Machine Intelligence, 11, 823-839; Kellman, P. J., & Shipley, T. F. (1991). Cognitive Psychology, 23, 141-221; Heitger, F., & von der Heydt, R. (1993). A computational model of neural contour processing, figure-ground segregation and illusory contours. In International Conference on Computer Vision (pp. 32-40); Mumford, D. (1994). Algebraic geometry and its applications, Springer, New York; Williams, L. R., & Jacobs, D. W. (1997). Neural Computation, 9, 837-858) that are based on Gestalt grouping cues by demonstrating that convexity plays a strong role in perceptual completion. In some cases convexity dominates the effects of the well-known Gestalt cue of good continuation. While convexity has been known to play a role in figure/ground segmentation (Rubin, 1927; Kanizsa & Gerbino, 1976), this is the first demonstration of its importance in perceptual completion.


Cognitive Brain Research | 1998

Simultaneous learning of motion discrimination in two directions

Zili Liu; Lucia M. Vaina

We take issue with theories about direction specificity in perceptual learning of motion discrimination. Trials of motion discrimination in two opposite directions were interleaved in uneven proportions (2:1). Human subjects improved faster in the direction with less frequent trials, indicating that learning transferred from the more frequent to the less frequent direction.


Journal of Vision | 2006

Computing dynamic classification images from correlation maps.

Hongjing Lu; Zili Liu

We used Pearson's correlation to compute dynamic classification images of biological motion in a point-light display. Observers discriminated whether a human figure that was embedded in dynamic white Gaussian noise was walking forward or backward. Their responses were correlated with the Gaussian noise fields frame by frame, across trials. The resultant correlation map gave rise to a sequence of dynamic classification images that were clearer than either the standard method of A. J. Ahumada and J. Lovell (1971) or the optimal weighting method of R. F. Murray, P. J. Bennett, and A. B. Sekuler (2002). Further, the correlation coefficients of all the point lights were similar to each other when overlapping pixels between forward and backward walkers were excluded. This pattern is consistent with the hypothesis that the point-light walker is represented in a global manner, as opposed to a fixed subset of point lights being more important than others. We conjecture that the superior performance of the correlation map may reflect inherent nonlinearities in processing biological motion, which are incompatible with the assumptions underlying the previous methods.
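
A minimal sketch of the correlation-map computation described above, assuming the noise fields and responses are stored as NumPy arrays (variable names are hypothetical, and details such as response coding and the exclusion of overlapping pixels are omitted):

import numpy as np

def correlation_map(noise, responses):
    # noise:     (n_trials, n_frames, height, width) dynamic white Gaussian noise fields
    # responses: (n_trials,) observer responses, e.g. +1 for "forward", -1 for "backward"
    # Returns a (n_frames, height, width) array of Pearson correlation coefficients,
    # computed pixel by pixel and frame by frame across trials.
    n_trials = noise.shape[0]
    z_resp = (responses - responses.mean()) / responses.std()
    z_noise = (noise - noise.mean(axis=0)) / noise.std(axis=0)
    return np.einsum('t,tfyx->fyx', z_resp, z_noise) / n_trials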


Journal of Vision | 2007

Motion perceptual learning: When only task-relevant information is learned

Xuan Huang; Hongjing Lu; Bosco S. Tjan; Yifeng Zhou; Zili Liu

The classic view that perceptual learning is information selective and goal directed has been challenged by recent findings showing that subthreshold and task-irrelevant information can induce perceptual learning. This study demonstrates a limit on task-irrelevant learning as exposure to suprathreshold task-irrelevant signals failed to induce perceptual learning. In each trial, two random-dot motion stimuli were presented in a two-alternative forced-choice task. Observers either decided which of the two contained a coherent motion signal (detection task), or whether the coherent motion direction was clockwise or counterclockwise relative to a reference direction (discrimination task). Whereas the exact direction of the coherent motion signal was irrelevant to the detection task, detection of the coherent motion signal was necessary for the discrimination task. We found that the detection trainees improved only their detection but not discrimination sensitivity, whereas the discrimination trainees improved both. Therefore, the importance of task relevance was demonstrated in both detection and discrimination learning. Furthermore, both detection and discrimination training along a single pedestal direction transferred to a broad range of pedestal directions. The profile of the discrimination transfer (as a function of pedestal direction) narrowed for the discrimination trainees.


Spatial Vision | 1996

Viewpoint dependency in object representation and recognition

Zili Liu

In order to recognize an object from a certain viewpoint, it is necessary for the object's image from this viewpoint to match the object's representation in memory. Clearly, both information from this image and the object representation in memory determine recognition performance. However, the current debate on the viewpoint dependency of internal representations focuses only on the viewpoint dependency of the performance, and equates it directly with viewpoint dependency of the representation, excluding the possibility that a viewpoint-independent representation can also give rise to a viewpoint-dependent performance. In the latter case, the performance will only depend on the image information from the viewpoint, which is determined by the three-dimensional (3D) shape of the object, irrespective of how the internal representation is learnt. A recognition performance that depends on the views from which the representation is learnt will provide strong evidence against any viewpoint-independent representations. Such an implication was supported by this study with 3D natural objects. In particular, performance from views that shared the same visible major object parts was systematically different, a result which is not predicted by the Recognition-By-Components theory (Biederman, Psychol. Rev., 94, 115-147, 1987).


Journal of Vision | 2008

Perceiving an object in its context—is the context cultural or perceptual?

Jiawei Zhou; Carrie Gotch; Yifeng Zhou; Zili Liu

S. Kitayama, S. Duffy, T. Kawamura, and J. T. Larsen (2003) found that East Asians, when shown a line inside a square, memorized the ratio of the line's length relative to the square more accurately than the line's absolute length, whereas North Americans showed the opposite results. Because of this study's important implications for cultural influences on visual perception, we attempted to replicate it in China and the USA, without success. Our 120 participants as a whole estimated a line's relative length more accurately than its absolute length, regardless of culture. Our results can be explained by the advantage of an explicit frame of reference in the ratio estimation, an advantage well known in the literature. Namely, the square as a frame of reference is more useful in the relative than in the absolute estimation of the line's length when the size of the square changed from study to recall.


Vision Research | 2006

Learning motion discrimination with suppressed and un-suppressed MT

Benjamin Thompson; Zili Liu

Perceptual learning of motion direction discrimination is generally thought to rely on the middle temporal area of the brain (MT/V5). A recent study investigating learning of motion discrimination when MT was psychophysically suppressed found that learning was possible with suppressed MT, but only when the task was sufficiently easy [Lu, H., Qian, N., Liu, Z. (2004). Learning motion discrimination with suppressed MT. Vision Research 44, 1817-1825]. We investigated whether this effect was indeed due to MT suppression or whether it could be explained by task difficulty alone. By comparing learning of motion discrimination when MT was suppressed vs. un-suppressed, at different task difficulties, we found that task difficulty alone could not explain the effects. At the highest difficulty, learning was not possible with suppressed MT, confirming the finding of Lu, Qian, and Liu (2004). In comparison, learning was possible with un-suppressed MT at the same difficulty level. At the intermediate task difficulty, there was a clear learning disadvantage when MT was suppressed. Only for the easiest level of difficulty did learning become equally possible for both suppressed and un-suppressed conditions. These findings suggest that MT plays an important role in learning to discriminate relatively fine differences in motion direction.


Vision Research | 2001

The effect of orientation learning on contrast sensitivity

Nestor Matthews; Zili Liu; Ning Qian

Regan and Beverley [Regan, D., & Beverley, K. I. (1985). Postadaptation orientation discrimination. Journal of the Optical Society of America A, 2(2), 147-155] previously demonstrated that adapting to an oriented visual stimulus improves sensitivity to subtle orientation differences while impairing contrast sensitivity. Here, we investigated whether practice-based improvements in orientation sensitivity would, like adaptation, impair contrast sensitivity. To the contrary, we found that contrast sensitivity actually improved significantly after observers demonstrated practice-based increases in orientation sensitivity. Therefore, while orientation sensitivity can be enhanced either by orientation-discrimination training or by adapting to visual stimuli, these two procedures have opposite effects on contrast sensitivity. This difference suggests that adaptation and perceptual learning on orientation discrimination cannot be explained sufficiently by a shared underlying cause, such as a reduction in neural activity.


Journal of Vision | 2005

Symmetry impedes symmetry discrimination

Bosco S. Tjan; Zili Liu

Objects in the world, natural and artificial alike, are often bilaterally symmetric. The visual system is likely to take advantage of this regularity to encode shapes for efficient object recognition. The nature of encoding a symmetric shape, and of encoding any departure from it, is therefore an important matter in visual perception. We addressed this issue of shape encoding empirically, noting that a particular encoding scheme necessarily leads to a specific profile of sensitivity in perceptual discriminations. We studied symmetry discrimination using human faces and random dots. Each face stimulus was a frontal view of a three-dimensional (3-D) face model. The 3-D face model was a linearly weighted average (a morph) between the model of an original face and that of the corresponding mirror face. Using this morphing technique to vary the degree of asymmetry, we found that, for faces and analogously generated random-dot patterns alike, symmetry discrimination was worst when the stimuli were nearly symmetric, in apparent opposition to almost all studies in the literature. We analyzed the previous work and reconciled the old and new results using a generic model with a simple nonlinearity. By defining asymmetry as the minimal difference between the left and right halves of an object, we found that the visual system was disproportionately more sensitive to larger departures from symmetry than to smaller ones. We further demonstrated that our empirical and modeling results were consistent with Weber-Fechner's and Stevens's laws.
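
The morph manipulation can be illustrated with a two-dimensional random-dot analogue; this is only a sketch (the study used 3-D face models and analogously generated dot patterns, and the function and variable names here are hypothetical):

import numpy as np

def morph_with_mirror(points, pair_index, w):
    # points:     (n, 2) dot coordinates with the symmetry axis at x = 0
    # pair_index: for each dot i, the index of its corresponding dot on the other side
    # w:          morph weight; w = 0 gives the original pattern, w = 1 its mirror image,
    #             and w = 0.5 a perfectly bilaterally symmetric pattern
    reflected_partner = points[pair_index] * np.array([-1.0, 1.0])  # flip x, keep y
    return (1.0 - w) * points + w * reflected_partner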

Collaboration


Dive into Zili Liu's collaborations.

Top Co-Authors

Hongjing Lu, University of California
Yifeng Zhou, University of Science and Technology of China
Bosco S. Tjan, University of Southern California
Bas Rokers, University of Wisconsin-Madison
Alan L. Yuille, Johns Hopkins University