Publication


Featured research published by Chao-Yi Li.


NeuroImage | 2011

Center–surround interaction with adaptive inhibition: A computational model for contour detection

Chi Zeng; Yongjie Li; Chao-Yi Li

The broad region outside the classical receptive field (CRF) of a neuron in the primary visual cortex (V1), namely the non-CRF (nCRF), exerts robust modulatory effects on responses to visual stimuli presented within the CRF. This modulation is mostly suppressive and plays important roles in visual information processing, one of which may be extracting object contours from disorderly background textures. In this study, a two-scale contour extraction model inspired by the inhibitory interactions between the CRF and nCRF of V1 neurons is presented. The key idea is that the side and end subregions of the nCRF work in different manners: while the strength of side inhibition is computed solely from the local features in the side regions at a fine spatial scale, the strength of end inhibition varies adaptively with the local features in both end and side regions at both fine and coarse scales. Computationally, the end regions exert weaker inhibition on the CRF at locations where a meaningful contour is more likely to exist in the local texture, and stronger inhibition at locations where the texture elements are mainly stochastic. Our results demonstrate that introducing such an adaptive mechanism into the model removes non-meaningful texture elements dramatically while extracting object contours effectively. Besides its superior contour-detection performance over other inhibition-based models, our model provides a better understanding of the roles of the nCRF and has potential applications in computer vision and pattern recognition.
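The center-surround principle behind this model can be sketched in a few lines of Python. The version below is a deliberately minimal, isotropic surround inhibition (the paper's side/end subregions and two-scale adaptivity are omitted); all function names and parameters are illustrative:

```python
import numpy as np

def surround_inhibition(response, radius=3, alpha=1.0):
    """Suppress each pixel's response by the mean response in its square
    surround (center excluded): a crude stand-in for nCRF inhibition."""
    h, w = response.shape
    padded = np.pad(response, radius, mode="edge")
    inhib = np.zeros_like(response)
    count = 0
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            if dy == 0 and dx == 0:
                continue
            inhib += padded[radius + dy: radius + dy + h,
                            radius + dx: radius + dx + w]
            count += 1
    inhib /= count
    return np.maximum(response - alpha * inhib, 0.0)

# uniform texture is fully inhibited; an isolated contour survives
texture = np.ones((9, 9))
edge = np.zeros((9, 9))
edge[:, 4] = 1.0
print(surround_inhibition(texture).max())   # -> 0.0
print(surround_inhibition(edge).max() > 0)  # -> True
```

This captures only the generic principle that dense, self-similar surroundings suppress a response while sparse contours do not; the adaptive weakening of end inhibition near likely contours is what the paper adds on top.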


International Conference on Computer Vision | 2013

A Color Constancy Model with Double-Opponency Mechanisms

Shaobing Gao; Kaifu Yang; Chao-Yi Li; Yongjie Li

The double-opponent color-sensitive cells in the primary visual cortex (V1) of the human visual system (HVS) have long been recognized as the physiological basis of color constancy. We introduce a new color constancy model that imitates the functional properties of the HVS from the retina to the double-opponent cells in V1. The idea behind the model originates from the observation that the color distribution of the responses of double-opponent cells to color-biased input images coincides well with the light-source direction. The true illuminant color of a scene is then easily estimated by searching for the maxima of the separate RGB channels of the double-opponent responses in RGB space. Our systematic experimental evaluations on two commonly used image datasets show that the proposed model produces competitive results in comparison to complex state-of-the-art approaches, but with a simple implementation and without the need for training.
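In its simplest form, the channel-wise maximum search described above reduces to a white-patch-style estimate. The sketch below substitutes a 3x3 box filter for the double-opponent filtering stage (an assumption made for brevity; all names are illustrative):

```python
import numpy as np

def estimate_illuminant(img):
    """Per-channel maximum of a lightly smoothed image, taken as the
    illuminant direction (box smoothing stands in for the DO filtering)."""
    h, w, _ = img.shape
    p = np.pad(img, ((1, 1), (1, 1), (0, 0)), mode="edge")
    smooth = sum(p[dy:dy + h, dx:dx + w]
                 for dy in range(3) for dx in range(3)) / 9.0
    e = smooth.reshape(-1, 3).max(axis=0)
    return e / np.linalg.norm(e)  # unit-norm illuminant estimate

def correct(img, e):
    """von Kries-style diagonal correction with the estimated illuminant."""
    return img / (e * np.sqrt(3))

# a neutral scene under a reddish light: the color cast is removed
scene = np.ones((8, 8, 3)) * np.array([1.0, 0.6, 0.6])
balanced = correct(scene, estimate_illuminant(scene))
print(np.allclose(balanced[..., 0], balanced[..., 1]))  # -> True
```

The division by the estimated illuminant is the standard diagonal (von Kries) correction step; the paper's contribution lies in how the estimate is obtained, not in the correction.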


Computer Vision and Pattern Recognition | 2013

Efficient Color Boundary Detection with Color-Opponent Mechanisms

Kaifu Yang; Shaobing Gao; Chao-Yi Li; Yongjie Li

Color information plays an important role in understanding natural scenes, not least by facilitating the discrimination of the boundaries of objects or areas. In this study, we propose a new framework for boundary detection in complex natural scenes based on the color-opponent mechanisms of the visual system. The red-green and blue-yellow color-opponent channels in the human visual system are regarded as the building blocks for various color perception tasks such as boundary detection. The proposed framework is a feedforward hierarchical model with a direct counterpart in the color-opponent mechanisms operating from the retina to the primary visual cortex (V1). Results show that our simple framework has an excellent ability to flexibly capture both the structured chromatic and achromatic boundaries in complex scenes.
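A minimal illustration of why opponent channels help: an isoluminant red/green edge is invisible to a luminance gradient but trivially visible in the red-green channel. The sketch below uses plain image gradients instead of the paper's hierarchical oriented filters, and the opponent-channel definitions are common textbook ones rather than the paper's exact weights:

```python
import numpy as np

def opponent_boundaries(img):
    """Gradient magnitude on red-green and blue-yellow opponent channels."""
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    rg = r - g                   # red-green opponent channel
    by = b - (r + g) / 2.0       # blue-yellow opponent channel
    def grad_mag(c):
        gy, gx = np.gradient(c)
        return np.hypot(gx, gy)
    return np.maximum(grad_mag(rg), grad_mag(by))

# an isoluminant red/green edge: flat in luminance, strong in opponency
img = np.zeros((8, 8, 3))
img[:, :4] = [1.0, 0.0, 0.0]
img[:, 4:] = [0.0, 1.0, 0.0]
print(np.ptp(img.mean(axis=2)) < 1e-12)    # -> True (no luminance edge)
print(opponent_boundaries(img).max() > 0)  # -> True (opponent edge found)
```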


IEEE Transactions on Image Processing | 2014

Multifeature-Based Surround Inhibition Improves Contour Detection in Natural Images

Kaifu Yang; Chao-Yi Li; Yongjie Li

To effectively perform visual tasks such as detecting contours, the visual system normally needs to integrate multiple visual features. Ample physiological evidence has revealed that for a large number of neurons in the primary visual cortex (V1) of monkeys and cats, the responses elicited by stimuli placed within the classical receptive field (CRF) are substantially modulated, normally inhibited, when the CRF and its surround (the non-CRF) differ in various local features. The exquisite sensitivity of V1 neurons to the center-surround stimulus configuration is thought to serve important perceptual functions, including contour detection. In this paper, we propose a biologically motivated model to improve the performance of perceptually salient contour detection. The main contribution is a multifeature-based center-surround framework, in which the surround-inhibition weights of individual features, including orientation, luminance, and luminance contrast, are combined according to a scale-guided strategy, and the combined weights are then used to modulate the final surround inhibition of the neurons. The performance was compared with that of single-cue-based models and other existing methods (especially other biologically motivated ones). The results show that combining multiple cues substantially improves contour detection compared with single-cue models. In general, luminance and luminance contrast contribute much more than orientation to the specific task of contour extraction, at least in gray-scale natural images.
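Stripped to its core, the multifeature idea is that each feature yields its own surround weight map and the maps are blended before modulating inhibition. The sketch below blends normalized luminance and local contrast with fixed weights (the paper uses a scale-guided combination and also includes orientation; all names and weight values here are illustrative):

```python
import numpy as np

def local_std(x, r=1):
    """Local standard deviation: a simple luminance-contrast feature."""
    h, w = x.shape
    p = np.pad(x, r, mode="edge")
    win = np.stack([p[dy:dy + h, dx:dx + w]
                    for dy in range(2 * r + 1) for dx in range(2 * r + 1)])
    return win.std(axis=0)

def inhibition_weight(img, w_lum=0.4, w_con=0.6):
    """Blend normalized luminance and contrast features into a single
    surround-inhibition weight map (fixed blend, not the paper's
    scale-guided strategy)."""
    lum = img / (img.max() + 1e-9)
    con = local_std(img)
    con = con / (con.max() + 1e-9)
    return w_lum * lum + w_con * con

# textured regions get larger weights, hence stronger inhibition
img = np.zeros((8, 8))
img[:, 4:] = np.random.default_rng(1).random((8, 4))  # noisy texture half
w = inhibition_weight(img)
print(w[:, 5:].mean() > w[:, :3].mean())  # -> True
```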


Neurocomputing | 2011

Contour detection based on a non-classical receptive field model with butterfly-shaped inhibition subregions

Chi Zeng; Yongjie Li; Kaifu Yang; Chao-Yi Li

Physiological studies show that the response of the classical receptive field (CRF) to a visual stimulus can be suppressed by non-classical receptive field (NCRF) inhibition in neurons of the primary visual cortex (V1), and that most CRFs and NCRFs in V1 are orientation-selective. In addition, surround inhibition is normally spatially asymmetric. Inspired by these visual mechanisms, in this paper we propose a contour detection method based on an improved orientation-selective inhibition model. A butterfly-shaped surround region is employed for computing the inhibition term, and only the side subregion that produces less inhibition contributes to the cell's response, which provides a flexible inhibitory effect for the NCRF modulation of the CRF. Comparisons with other visual contour detection models show that the proposed model suppresses texture effectively while retaining contours as much as possible.


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2015

Color Constancy Using Double-Opponency

Shaobing Gao; Kaifu Yang; Chao-Yi Li; Yongjie Li

The double-opponent (DO) color-sensitive cells in the primary visual cortex (V1) of the human visual system (HVS) have long been recognized as the physiological basis of color constancy. In this work we propose a new color constancy model that imitates the functional properties of the HVS from the single-opponent (SO) cells in the retina to the DO cells in V1 and the possible neurons in the higher visual cortices. The idea behind the proposed double-opponency-based color constancy (DOCC) model originates from the observation that the color distribution of the responses of DO cells to color-biased images coincides well with the vector denoting the light-source color. The illuminant color is then easily estimated by pooling the responses of DO cells in separate channels in LMS space, using a sum or max pooling mechanism. Extensive evaluations on three commonly used datasets, including tests with dataset-dependent optimal parameters as well as intra- and inter-dataset cross-validation, show that our physiologically inspired DOCC model produces quite competitive results in comparison to the state-of-the-art approaches, but with a relatively simple implementation and without requiring fine-tuning of the method for each dataset.
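The pooling step can be stated compactly. The sketch below pools rectified per-pixel responses channel-wise with sum or max; plain RGB triples are assumed where the paper works in LMS space, and the DO filtering stage itself is not modeled:

```python
import numpy as np

def pool_illuminant(responses, mode="max"):
    """Pool rectified per-pixel responses channel-wise with sum or max to
    obtain a unit-norm illuminant direction (sketch of the pooling step)."""
    r = np.maximum(responses.reshape(-1, 3), 0.0)  # half-wave rectification
    e = r.max(axis=0) if mode == "max" else r.sum(axis=0)
    return e / (np.linalg.norm(e) + 1e-12)

# for a spatially uniform response the two pooling rules agree in direction
resp = np.tile([0.8, 0.5, 0.2], (16, 1)).reshape(4, 4, 3)
print(np.allclose(pool_illuminant(resp, "max"),
                  pool_illuminant(resp, "sum")))  # -> True
```

On real images the two rules generally differ: max pooling follows the brightest responses while sum pooling averages over the whole field, which is why the paper evaluates both.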


IEEE Transactions on Image Processing | 2015

Boundary Detection Using Double-Opponency and Spatial Sparseness Constraint

Kaifu Yang; Shaobing Gao; Ce-Feng Guo; Chao-Yi Li; Yongjie Li

Brightness and color are two basic visual features integrated by the human visual system (HVS) to gain a better understanding of natural color scenes. Aiming to combine these two cues to maximize the reliability of boundary detection in natural scenes, we propose a new framework based on the color-opponent mechanisms of a certain type of color-sensitive double-opponent (DO) cell in the primary visual cortex (V1) of the HVS. This type of DO cell has an oriented receptive field with both chromatically and spatially opponent structure. The proposed framework is a feedforward hierarchical model with a direct counterpart in the color-opponent mechanisms operating from the retina to V1. In addition, we employ a spatial sparseness constraint (SSC) on neural responses to further suppress the unwanted edges of texture elements. Experimental results show that the DO cells we modeled can flexibly capture both the structured chromatic and achromatic boundaries of salient objects in complex scenes when the cone inputs to the DO cells are unbalanced, while the SSC operator further improves performance by suppressing redundant texture edges. With competitive contour-detection accuracy, the proposed model has the additional advantage of a quite simple implementation with low computational cost.


European Conference on Computer Vision | 2014

Efficient Color Constancy with Local Surface Reflectance Statistics

Shaobing Gao; Wangwang Han; Kaifu Yang; Chao-Yi Li; Yongjie Li

The aim of computational color constancy is to estimate the actual surface colors in an acquired scene regardless of its illuminant. Many solutions first estimate the illuminant and then correct the image with that estimate. Based on the linear image formation model, we propose in this work a new strategy to estimate the illuminant. Inspired by the feedback modulation from horizontal cells to the cones in the retina, we first normalize each local patch by its local maximum to obtain the so-called locally normalized reflectance estimate (LNRE). We then found experimentally that the ratio of the global summation of the true surface reflectance to the global summation of the LNRE in a scene is approximately achromatic for both indoor and outdoor scenes. Based on this observation, we estimate the illuminant by computing the ratio of the global summation of the intensities to the global summation of the locally normalized intensities of the color-biased image. The proposed model has only one free parameter and, unlike learning-based approaches, requires no explicit training. Experimental results on four commonly used datasets show that our model produces competitive or even better results compared to the state-of-the-art approaches, at low computational cost.
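The estimator itself is a one-liner once the locally normalized image is built: divide the global channel sums of the image by the global channel sums of its patchwise-normalized version. A sketch under the assumption of non-overlapping square patches (the patch scheme and all names are illustrative):

```python
import numpy as np

def lnre_illuminant(img, patch=4):
    """Normalize each patch by its per-channel local maximum, then take
    the ratio of global sums as the illuminant direction."""
    h, w, _ = img.shape
    norm = np.empty_like(img)
    for y in range(0, h, patch):
        for x in range(0, w, patch):
            block = img[y:y + patch, x:x + patch]
            local_max = block.reshape(-1, 3).max(axis=0) + 1e-12
            norm[y:y + patch, x:x + patch] = block / local_max
    e = img.reshape(-1, 3).sum(axis=0) / norm.reshape(-1, 3).sum(axis=0)
    return e / np.linalg.norm(e)

# a flat gray world under a green-shifted light
scene = np.ones((8, 8, 3)) * np.array([0.5, 1.0, 0.5])
print(np.round(lnre_illuminant(scene), 3))  # -> [0.408 0.816 0.408]
```

On this degenerate flat scene the estimate simply recovers the cast's direction; the interest of the method lies in the empirical finding that the same ratio stays near-achromatic for real textured scenes.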


Frontiers in Computational Neuroscience | 2015

A Retina Inspired Model for Enhancing Visibility of Hazy Images

Xian-Shi Zhang; Shaobing Gao; Chao-Yi Li; Yongjie Li

The mammalian retina seems far smarter than scientists have long believed. Inspired by the visual processing mechanisms in the retina, from the layer of photoreceptors to the layer of retinal ganglion cells (RGCs), we propose a computational model for haze removal from a single input image, an important issue in the field of image enhancement. In particular, the bipolar cells serve to roughly remove the low-frequency component of the haze, and the amacrine cells modulate the output of the cone bipolar cells to compensate for the loss of detail by increasing image contrast. The RGCs, with their disinhibitory receptive-field surround, then refine the local haze removal as well as the image detail enhancement. Results on a variety of real-world and synthetic hazy images show that the proposed model yields results comparable to or even better than the state-of-the-art methods, with the advantage of simultaneously dehazing and enhancing a single hazy image through a simple and straightforward implementation.


Journal of the Optical Society of America A: Optics, Image Science, and Vision | 2017

Improving color constancy by discounting the variation of camera spectral sensitivity

Shaobing Gao; Ming Zhang; Chao-Yi Li; Yongjie Li

Recovering the true scene colors from a color-biased image by simultaneously discounting the effects of the scene illuminant and the camera spectral sensitivity (CSS) is an ill-posed problem. Most color constancy (CC) models are designed to first estimate the illuminant color, which is then removed from the color-biased image to obtain an image as if taken under white light, without explicit consideration of the CSS effect on CC. This paper first studies the CSS effect on illuminant estimation arising in inter-dataset-based CC (inter-CC), i.e., training a CC model on one dataset and then testing on another dataset captured with a distinct CSS. We show the clear degradation of existing CC models in the inter-CC setting. We then propose a simple way to overcome this degradation by first quickly learning a transform matrix between the two distinct CSSs (CSS-1 and CSS-2). The learned matrix is used to convert the data (both the illuminant ground truth and the color-biased images) rendered under CSS-1 into CSS-2, so that the CC model can be trained and applied on the color-biased images under CSS-2 without the burdensome acquisition of a training set under CSS-2. Extensive experiments on synthetic and real images show that our method clearly improves the inter-CC performance of traditional CC algorithms. We suggest that, by taking the CSS effect into account, one is more likely to obtain truly color-constant images invariant to changes in both the illuminant and the camera sensor.
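At heart, the transform-matrix step is a small least-squares problem: given corresponding sensor responses under the two CSSs, fit a 3x3 matrix mapping one to the other. The sketch below demonstrates this on synthetic data; the fitting setup and all names are illustrative, not the paper's exact procedure:

```python
import numpy as np

def learn_css_transform(rgb_css1, rgb_css2):
    """Least-squares 3x3 matrix M such that rgb_css2 ~= rgb_css1 @ M."""
    M, *_ = np.linalg.lstsq(rgb_css1, rgb_css2, rcond=None)
    return M

# synthetic check: recover a known sensor-to-sensor transform exactly
rng = np.random.default_rng(0)
responses_css1 = rng.random((100, 3))
M_true = np.array([[0.90, 0.10, 0.00],
                   [0.00, 1.10, 0.05],
                   [0.02, 0.00, 0.95]])
responses_css2 = responses_css1 @ M_true
M = learn_css_transform(responses_css1, responses_css2)
print(np.allclose(M, M_true, atol=1e-8))  # -> True
```

Once M is known, images and illuminant ground truths rendered under CSS-1 can be mapped into the CSS-2 color space before training, which is the paper's route around collecting a new training set.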

Collaboration


Dive into Chao-Yi Li's collaborations.

Top Co-Authors (all at the University of Electronic Science and Technology of China):

Yongjie Li
Kaifu Yang
Shaobing Gao
Chi Zeng
Hui Li
Teng Qiu
Xian-Shi Zhang
Ce-Feng Guo
Ming Zhang
Ruo-Xuan Li