Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Li Zhaoping is active.

Publication


Featured research published by Li Zhaoping.


Neuron | 2005

Border Ownership from Intracortical Interactions in Visual Area V2

Li Zhaoping

A border between two image regions normally belongs to only one of the regions; determining which one it belongs to is essential for surface perception and figure-ground segmentation. Border ownership is signaled by a class of V2 neurons, even though its value depends on information coming from well outside their classical receptive fields. I use a model of V2 to show that this visual area is able to generate the ownership signal by itself, without requiring any top-down mechanism or external explicit labels for figures, T junctions, or corners. In the model, neurons have spatially local classical receptive fields, are tuned to orientation, and receive information (from V1) about the location and orientation of borders. Border ownership signals that model physiological observations arise through finite-range, intra-areal interactions. Additional effects from surface features and attention are discussed. The model makes testable predictions.


Neuron | 2012

Neural Activities in V1 Create a Bottom-Up Saliency Map

Xilin Zhang; Li Zhaoping; Tiangang Zhou; Fang Fang

The bottom-up contribution to the allocation of exogenous attention is a saliency map, whose neural substrate is hard to identify because of possible contamination by top-down signals. We obviated this possibility using stimuli that observers could not perceive, but that nevertheless, through orientation contrast between foreground and background regions, attracted attention to improve a localized visual discrimination. When orientation contrast increased, so did the degree of attraction, and two physiological measures: the amplitude of the earliest (C1) component of the ERP, which is associated with primary visual cortex, and fMRI BOLD signals in areas V1-V4 (but not the intraparietal sulcus). Significantly, across observers, the degree of attraction correlated with the C1 amplitude and just the V1 BOLD signal. These findings strongly support the proposal that a bottom-up saliency map is created in V1, challenging the dominant view that the saliency map is generated in the parietal cortex.


Network: Computation In Neural Systems | 2006

Theoretical understanding of the early visual processes by data compression and data selection

Li Zhaoping

Early vision is best understood in terms of two key information bottlenecks along the visual pathway — the optic nerve and, more severely, attention. Two effective strategies for sampling and representing visual inputs in the light of the bottlenecks are (1) data compression with minimum information loss and (2) data deletion. This paper reviews two lines of theoretical work that understand processes in the retina and primary visual cortex (V1) in this framework. The first is an efficient coding principle, which argues that early visual processes compress input into a more efficient form to transmit as much information as possible through channels of limited capacity. It can explain the properties of visual sampling and the nature of the receptive fields of the retina and V1. It has also been argued to reveal the independent causes of the inputs. The second theoretical tack is the hypothesis that neural activities in V1 represent the bottom-up saliencies of visual inputs, such that information can be selected for, or discarded from, detailed or attentive processing. This theory links V1 physiology with pre-attentive visual selection behavior. By making experimentally testable predictions, the potential and limitations of both sets of theories can be explored.
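The efficient coding principle above — compressing correlated inputs into a less redundant form before a limited-capacity channel — can be illustrated with a minimal sketch (my own toy example, not the paper's model): whitening a pair of correlated "pixel" signals so that each output channel carries non-redundant, equal-variance information.

```python
import numpy as np

# Toy illustration of efficient coding as decorrelation (whitening).
# Neighboring photoreceptor signals in natural scenes are strongly
# correlated; a whitening transform removes that redundancy, a simple
# instance of compression with minimal information loss.

rng = np.random.default_rng(0)

# Correlated 2-"pixel" Gaussian inputs (correlation 0.9).
cov = np.array([[1.0, 0.9],
                [0.9, 1.0]])
x = rng.multivariate_normal([0.0, 0.0], cov, size=10000)

# Whitening transform W = C^{-1/2}, from the eigendecomposition of the
# estimated input covariance C.
C = np.cov(x, rowvar=False)
evals, evecs = np.linalg.eigh(C)
W = evecs @ np.diag(evals ** -0.5) @ evecs.T

y = x @ W.T  # whitened responses: decorrelated, unit variance

print(np.round(np.cov(y, rowvar=False), 2))  # approximately the identity
```

The output covariance is (numerically) the identity matrix: the two channels no longer duplicate each other's information, which is the sense in which the code is "efficient" under a capacity constraint.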


PLOS Computational Biology | 2005

Psychophysical Tests of the Hypothesis of a Bottom-Up Saliency Map in Primary Visual Cortex

Li Zhaoping; Keith A. May

A unique vertical bar among horizontal bars is salient and pops out perceptually. Physiological data have suggested that mechanisms in the primary visual cortex (V1) contribute to the high saliency of such a unique basic feature, but indicated little regarding whether V1 plays an essential or peripheral role in input-driven or bottom-up saliency. Meanwhile, a biologically based V1 model has suggested that V1 mechanisms can also explain bottom-up saliencies beyond the pop-out of basic features, such as the low saliency of a unique conjunction feature such as a red vertical bar among red horizontal and green vertical bars, under the hypothesis that the bottom-up saliency at any location is signaled by the activity of the most active cell responding to it, regardless of the cell's preferred features such as color and orientation. The model can account for phenomena such as the difficulties in conjunction feature search, asymmetries in visual search, and how background irregularities affect ease of search. In this paper, we report nontrivial predictions from the V1 saliency hypothesis, and their psychophysical tests and confirmations. The prediction that most clearly distinguishes the V1 saliency hypothesis from other models is that task-irrelevant features could interfere in visual search or segmentation tasks which rely significantly on bottom-up saliency. For instance, irrelevant colors can interfere in an orientation-based task, and the presence of horizontal and vertical bars can impair performance in a task based on oblique bars. Furthermore, properties of the intracortical interactions and neural selectivities in V1 predict specific emergent phenomena associated with visual grouping. Our findings support the idea that a bottom-up saliency map can be at a lower visual area than traditionally expected, with implications for top-down selection mechanisms.


Current Biology | 2003

Top-Down Modulation of Lateral Interactions in Early Vision: Does Attention Affect Integration of the Whole or Just Perception of the Parts?

Elliot Freeman; Jon Driver; Dov Sagi; Li Zhaoping

Attention can modulate sensitivity to local stimuli in early vision. But, can attention also modulate integration of local stimuli into global visual patterns? We recently measured effects of attention on the phenomenon of lateral interactions between collinear elements, commonly thought to reflect long-range mechanisms in early visual cortex underlying contour integration. We showed improved detection of low-contrast central Gabor targets in the context of collinear flankers, but only when the collinear flankers were attended for a secondary task rather than ignored in favor of an orthogonal flanker pair. Here, we contrast two hypotheses for how attention might modulate flanker influences on the target: by changing just local sensitivity to the flankers themselves (flanker-modulation-only hypothesis), or by weighting integrative connections between flanker and target (connection-weighting hypothesis). Modeled on the known nonlinear dependence of target visibility on collinear flanker contrast, the first hypothesis predicts that an increase in physical flanker contrast should readily offset any reduction in their effective contrast when ignored, thus eliminating attentional modulation. Conversely, the second hypothesis predicts that attentional modulation should persist even for the highest flanker contrasts. Our results showed the latter outcome and indicated that attention modulates flanker-target integration, rather than just processing of local flanker elements.


Journal of Physiology-paris | 2003

V1 mechanisms and some figure–ground and border effects

Li Zhaoping

V1 neurons have been observed to respond more strongly to figure than background regions. Within a figure region, the responses are usually stronger near figure boundaries (the border effect) than further inside the boundaries. Sometimes the medial axes of the figures (e.g., the vertical midline of a vertical figure strip) induce secondary, intermediate, response peaks (the medial axis effect). Related is the physiologically elusive "cross-orientation facilitation", the observation that a cell's response to a grating patch can be facilitated by an orthogonally oriented grating in the surround. Feedback from higher centers has been suggested to cause these figure–ground effects. It has been shown, using a V1 model, that the causes could be intra-cortical interactions within V1 that serve pre-attentive visual segmentation, particularly object boundary detection. Furthermore, whereas the border effect is robust, the figure–ground effects in the interior of a figure, in particular the medial axis effect, are by-products of the border effect and are predicted to diminish to zero for larger figures. This model prediction (of the figure size dependence) was subsequently confirmed physiologically, and supported by findings that the response modulations by texture surround do not depend on feedback from V2. In addition, the model explains the "cross-orientation facilitation" as caused by a dis-inhibition, to the cell responding to the center of the central grating, by the background grating. Furthermore, the elusiveness of this phenomenon was accounted for by the insight that it depends critically on the size of the figure grating.
The model is applied to understand some figure–ground effects and segmentation in psychophysics: in particular, that contrast discrimination threshold is lower within and at the center of a closed contour than in the background, and that a very briefly presented vernier target can perceptually shine through a subsequently presented large grating centered at the same location.


Vision Research | 2006

Perceptual learning with spatial uncertainties

Thomas U. Otto; Michael H. Herzog; Manfred Fahle; Li Zhaoping

In perceptual learning, stimuli are usually assumed to be presented at a constant retinal location during training. However, due to tremor, drift, and microsaccades of the eyes, the same stimulus covers different retinal positions on sequential trials. Because of these variations, the mathematical decision problem changes from linear to non-linear. This non-linearity implies three predictions. First, varying the spatial position of a stimulus within a moderate range does not deteriorate perceptual learning. Second, improvement for one stimulus variant can yield negative transfer to other variants. Third, interleaved training with two stimulus variants yields no or strongly diminished learning. Using a bisection task, we found psychophysical evidence for the first and last prediction. However, no negative transfer was found, contrary to the second prediction.


Visual Cognition | 2006

A theory of a saliency map in primary visual cortex (V1) tested by psychophysics of colour-orientation interference in texture segmentation

Li Zhaoping; Robert Jefferson Snowden

It has been proposed that V1 creates a bottom-up saliency map, where the saliency of any location increases with the firing rate of the most active V1 output cell responding to it, regardless of the feature selectivity of the cell. Thus, a red vertical bar may have its saliency signalled by a cell tuned to red colour, or one tuned to vertical orientation, whichever cell is the most active. This theory predicts interference between colour and orientation features in texture segmentation tasks where bottom-up processes are significant. The theory not only explains existing data, but also provides a prediction. A subsequent psychophysical test confirmed the prediction by showing that segmentation of textures of oriented bars became more difficult as the colours of the bars were randomly drawn from more colour categories.


Neurobiology of Attention | 2005

The primary visual cortex creates a bottom-up saliency map

Li Zhaoping

It has been proposed that the primary visual cortex (V1) creates a saliency map using autonomous intra-cortical mechanisms. The saliency of a visual location describes the location's ability to attract attention without top-down factors. It increases monotonically with the firing rate of the most active V1 cell responding to that location. Given the prevalent feature selectivities of V1 cells (many tuned to more than one feature dimension), no separate feature maps, or any subsequent combinations of them, are needed to create a saliency map. This proposal has been demonstrated in a biologically based V1 model. By relating the saliencies of the visual search targets or object (texture) boundaries to the ease of the visual search or segmentation tasks, the model accounted for behavioral data such as how task difficulties can be influenced by image features and their spatial configurations. This proposal links physiology with psychophysics, thereby making testable predictions, some of which were subsequently confirmed experimentally.
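The max rule at the heart of this proposal — saliency at a location is set by the single most active cell responding to it, not by summing separate feature maps — can be sketched in a toy example (my illustration, not the biologically detailed model; the response numbers are invented):

```python
import numpy as np

# Toy illustration of the V1 saliency hypothesis' max rule: the saliency
# of each location is the response of the MOST active cell there,
# regardless of that cell's preferred feature.

# Rows are hypothetical feature-tuned channels (e.g., vertical-tuned,
# red-tuned cells); columns are four visual locations. Location 2 holds
# a unique vertical bar, driving the orientation channel strongly.
responses = np.array([
    [0.2, 0.2, 0.9, 0.2],   # orientation-tuned cells
    [0.3, 0.3, 0.3, 0.3],   # colour-tuned cells
])

# Max rule: no separate feature maps are combined; just take the maximum
# response across cells at each location.
saliency = responses.max(axis=0)
print(saliency)                # [0.3 0.3 0.9 0.3]

# Bottom-up attention is attracted first to the most salient location.
print(int(saliency.argmax()))  # 2: the unique item pops out
```

Because only the maximum matters, a weakly responding colour channel cannot raise the saliency of a location where an orientation channel already responds more strongly, which is what produces the colour–orientation interference predictions tested in the psychophysics papers above.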


Current Biology | 2007

Interference with Bottom-Up Feature Detection by Higher-Level Object Recognition

Li Zhaoping; Nathalie Guyader

Drawing portraits upside down is a trick that allows novice artists to reproduce lower-level image features, e.g., contours, while reducing interference from higher-level face cognition. Limiting the available processing time to suffice for lower- but not higher-level operations is a more general way of reducing interference. We elucidate this interference in a novel visual-search task to find a target among distractors. The target had a unique lower-level orientation feature but was identical to distractors in its higher-level object shape. Through bottom-up processes, the unique feature attracted gaze to the target. Subsequently, on recognizing the attended object as identical in shape to the distractors, viewpoint-invariant object recognition interfered. Consequently, gaze often abandoned the target to search elsewhere. If the search stimulus was extinguished at time T after the gaze arrived at the target, reports of target location were more accurate for shorter (T<500 ms) presentations. This object-to-feature interference, though perhaps unexpected, could underlie common phenomena such as the visual-search asymmetry that finding a familiar letter N among its mirror images is more difficult than the converse. Our results should enable additional examination of known phenomena and interactions between different levels of visual processes.

Collaboration


An overview of Li Zhaoping's collaborations.

Top Co-Authors

Alex Lewis

University College London


Peter Dayan

University College London


Wu Li

McGovern Institute for Brain Research
