Shaobing Gao
University of Electronic Science and Technology of China
Publication
Featured research published by Shaobing Gao.
International Conference on Computer Vision | 2013
Shaobing Gao; Kaifu Yang; Chao-Yi Li; Yongjie Li
The double-opponent color-sensitive cells in the primary visual cortex (V1) of the human visual system (HVS) have long been recognized as the physiological basis of color constancy. We introduce a new color constancy model that imitates the functional properties of the HVS from the retina to the double-opponent cells in V1. The idea behind the model originates from the observation that the color distribution of the responses of double-opponent cells to color-biased input images coincides well with the light source direction. The true illuminant color of a scene can then be estimated by searching for the maxima of the separate RGB channels of the double-opponent responses in RGB space. Our systematic experimental evaluations on two commonly used image datasets show that the proposed model produces competitive results in comparison to complex state-of-the-art approaches, but with a simple implementation and without the need for training.
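The estimation step described above can be sketched roughly as follows. This is an illustrative toy, not the authors' implementation: a box blur stands in for the model's Gaussian receptive fields, and the function names, the rectification, and the filter size `k` are all assumptions of this sketch.

```python
import numpy as np

def _box_blur(img, k):
    """Separable box blur per channel (a crude stand-in for the
    Gaussian receptive fields of the paper's model)."""
    kernel = np.ones(k) / k
    out = np.apply_along_axis(lambda col: np.convolve(col, kernel, mode='same'), 0, img)
    out = np.apply_along_axis(lambda row: np.convolve(row, kernel, mode='same'), 1, out)
    return out

def estimate_illuminant(img, k=15):
    """Rectified center-minus-surround response per channel (a DO-like
    signal), then the per-channel maxima as the illuminant estimate."""
    do = np.maximum(img - _box_blur(img, k), 0)     # rectified DO-like response
    e = do.reshape(-1, img.shape[2]).max(axis=0) + 1e-12
    return e / e.sum()                              # normalized illuminant color
```

Once the illuminant color is estimated this way, the image can be corrected by dividing each channel by its estimated illuminant component (a von Kries-style correction).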
Computer Vision and Pattern Recognition | 2013
Kaifu Yang; Shaobing Gao; Chao-Yi Li; Yongjie Li
Color information plays an important role in the understanding of natural scenes, at the least by facilitating the discrimination of boundaries between objects or areas. In this study, we propose a new framework for boundary detection in complex natural scenes based on the color-opponent mechanisms of the visual system. The red-green and blue-yellow color-opponent channels in the human visual system are regarded as the building blocks for various color perception tasks such as boundary detection. The proposed framework is a feedforward hierarchical model that directly corresponds to the color-opponent mechanisms along the pathway from the retina to the primary visual cortex (V1). Results show that our simple framework can flexibly capture both the structured chromatic and achromatic boundaries in complex scenes.
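The opponent-channel idea can be illustrated with a minimal sketch, assuming a simple gradient detector per channel; the paper's hierarchical V1 model is far richer than this, and the channel definitions below are common textbook approximations rather than the paper's exact formulation:

```python
import numpy as np

def opponent_boundaries(img):
    """Boundary map from red-green, blue-yellow, and luminance channels:
    per-channel gradient magnitude, combined by a pixelwise max."""
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    channels = (r - g,                 # red-green opponent channel
                b - (r + g) / 2,       # blue-yellow opponent channel
                img.mean(axis=2))      # achromatic (luminance) channel
    edges = np.zeros(img.shape[:2])
    for ch in channels:
        gy, gx = np.gradient(ch)
        edges = np.maximum(edges, np.hypot(gx, gy))
    return edges
```

Note how an isoluminant red/green boundary, invisible in the luminance channel, is still picked up by the red-green opponent channel.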
Computer Vision and Pattern Recognition | 2015
Kaifu Yang; Shaobing Gao; Yongjie Li
Illuminant estimation is a key step in computational color constancy. Instead of relying on the grey-world or grey-edge assumptions, we propose in this paper a novel method for illuminant estimation that uses the grey pixels detected in a given color-biased image. The underlying hypothesis is that most natural images include some detectable pixels that are at least approximately grey, which can be reliably exploited for illuminant estimation. We first validate this assumption through a comprehensive statistical evaluation on a diverse collection of datasets and then put forward a novel grey pixel detection method based on an illuminant-invariant measure (IIM) in three logarithmic color channels. The light source color of a scene can then be easily estimated from the detected grey pixels. Experimental results on four benchmark datasets (three recorded under a single illuminant and one under multiple illuminants) show that the proposed method outperforms most state-of-the-art color constancy approaches, with the inherent merit of low computational cost.
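A loose sketch of the grey-pixel idea, under the assumption (mine, for illustration) that a pixel is "grey" when its local contrast in the three log channels is nearly equal; the paper's IIM is defined more carefully, and `frac` and the 3x3 neighborhood are arbitrary choices here:

```python
import numpy as np

def grey_pixel_illuminant(img, frac=0.01, eps=1e-6):
    """Pick the `frac` most-grey pixels (equal log-channel contrast) and
    average their color to estimate the illuminant."""
    log_img = np.log(img + eps)
    # local contrast: deviation from a 3x3 neighborhood mean, per channel
    pad = np.pad(log_img, ((1, 1), (1, 1), (0, 0)), mode='edge')
    local_mean = sum(pad[i:i + img.shape[0], j:j + img.shape[1]]
                     for i in range(3) for j in range(3)) / 9.0
    contrast = np.abs(log_img - local_mean)
    # greyness score: 0 when contrast is identical across channels
    greyness = contrast.std(axis=2) / (contrast.mean(axis=2) + eps)
    n = max(1, int(frac * greyness.size))
    idx = np.argsort(greyness.ravel())[:n]
    e = img.reshape(-1, 3)[idx].mean(axis=0)
    return e / e.sum()
```

The key property, which the test below exercises, is that a grey-surface scene under a colored light yields the light's color directly, since grey surfaces reflect the illuminant unchanged.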
IEEE Transactions on Pattern Analysis and Machine Intelligence | 2015
Shaobing Gao; Kaifu Yang; Chao-Yi Li; Yongjie Li
The double-opponent (DO) color-sensitive cells in the primary visual cortex (V1) of the human visual system (HVS) have long been recognized as the physiological basis of color constancy. In this work we propose a new color constancy model that imitates the functional properties of the HVS from the single-opponent (SO) cells in the retina to the DO cells in V1 and the possible neurons in the higher visual cortices. The idea behind the proposed double-opponency-based color constancy (DOCC) model originates from the observation that the color distribution of the responses of DO cells to color-biased images coincides well with the vector denoting the light source color. The illuminant color is then easily estimated by pooling the responses of DO cells in separate channels of LMS space with a sum or max pooling mechanism. Extensive evaluations on three commonly used datasets, including tests with dataset-dependent optimal parameters as well as intra- and inter-dataset cross validation, show that our physiologically inspired DOCC model produces quite competitive results in comparison to the state-of-the-art approaches, but with a relatively simple implementation and without requiring fine-tuning of the method for each dataset.
IEEE Transactions on Image Processing | 2015
Kaifu Yang; Shaobing Gao; Ce-Feng Guo; Chao-Yi Li; Yongjie Li
Brightness and color are two basic visual features integrated by the human visual system (HVS) to gain a better understanding of natural color scenes. Aiming to combine these two cues to maximize the reliability of boundary detection in natural scenes, we propose a new framework based on the color-opponent mechanisms of a certain type of color-sensitive double-opponent (DO) cells in the primary visual cortex (V1) of the HVS. This type of DO cell has an oriented receptive field with both chromatically and spatially opponent structure. The proposed framework is a feedforward hierarchical model that directly corresponds to the color-opponent mechanisms along the pathway from the retina to V1. In addition, we employ a spatial sparseness constraint (SSC) on neural responses to further suppress the unwanted edges of texture elements. Experimental results show that the DO cells we modeled can flexibly capture both the structured chromatic and achromatic boundaries of salient objects in complex scenes when the cone inputs to the DO cells are unbalanced. Meanwhile, the SSC operator further improves performance by suppressing redundant texture edges. With competitive contour detection accuracy, the proposed model has the additional advantage of a quite simple implementation with low computational cost.
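The texture-suppression idea behind the SSC can be sketched as follows. This is only a loose reading of the paper's operator: the exponential attenuation, the window size `k`, and the strength `alpha` are my illustrative assumptions, not the published formulation.

```python
import numpy as np

def sparseness_suppress(edges, k=7, alpha=3.0):
    """Attenuate edge responses in densely responding neighborhoods
    (textures) more than isolated contour responses."""
    pad = np.pad(edges, k // 2, mode='edge')
    h, w = edges.shape
    # local density of edge responses in a k x k window
    density = sum(pad[i:i + h, j:j + w] for i in range(k) for j in range(k)) / (k * k)
    return edges * np.exp(-alpha * density)  # sparse responses survive
```

An isolated contour pixel sits in a low-density neighborhood and keeps most of its strength, while pixels inside a dense texture patch are strongly attenuated.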
European Conference on Computer Vision | 2014
Shaobing Gao; Wangwang Han; Kaifu Yang; Chao-Yi Li; Yongjie Li
The aim of computational color constancy is to estimate the actual surface color in an acquired scene regardless of its illuminant. Many solutions first estimate the illuminant and then correct the image with that estimate. Based on the linear image formation model, we propose in this work a new strategy to estimate the illuminant. Inspired by the feedback modulation from horizontal cells to the cones in the retina, we first normalize each local patch by its local maximum to obtain the so-called locally normalized reflectance estimate (LNRE). We then experimentally found that the ratio of the global summation of the true surface reflectance to the global summation of the LNRE in a scene is approximately achromatic for both indoor and outdoor scenes. Based on this observation, we estimate the illuminant by computing the ratio of the global summation of the intensities to the global summation of the locally normalized intensities of the color-biased image. The proposed model has only one free parameter and requires no explicit training as in learning-based approaches. Experimental results on four commonly used datasets show that our model produces competitive or even better results compared to state-of-the-art approaches, at low computational cost.
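The two-step computation described above (local-max normalization, then a ratio of global sums) can be sketched as follows; the function names and the window size `k` are illustrative assumptions, and a plain sliding-window maximum stands in for the paper's horizontal-cell feedback:

```python
import numpy as np

def lnre_illuminant(img, k=5, eps=1e-6):
    """Normalize each pixel by its local per-channel maximum (the LNRE
    step), then estimate the illuminant as the per-channel ratio of the
    global sums of the raw vs. locally normalized image."""
    pad = np.pad(img, ((k // 2, k // 2), (k // 2, k // 2), (0, 0)), mode='edge')
    h, w, _ = img.shape
    local_max = np.stack([
        np.max([pad[i:i + h, j:j + w, c] for i in range(k) for j in range(k)], axis=0)
        for c in range(3)], axis=2)
    lnre = img / (local_max + eps)                      # locally normalized image
    e = img.sum(axis=(0, 1)) / (lnre.sum(axis=(0, 1)) + eps)
    return e / e.sum()                                   # normalized illuminant color
```

The local normalization cancels the (locally smooth) illuminant from the LNRE, so the ratio of sums recovers the illuminant color.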
Frontiers in Computational Neuroscience | 2015
Xian-Shi Zhang; Shaobing Gao; Chao-Yi Li; Yongjie Li
The mammalian retina appears far smarter than scientists have believed so far. Inspired by the visual processing mechanisms in the retina, from the layer of photoreceptors to the layer of retinal ganglion cells (RGCs), we propose a computational model for haze removal from a single input image, an important issue in the field of image enhancement. In particular, the bipolar cells serve to roughly remove the low-frequency component of the haze, and the amacrine cells modulate the output of the cone bipolar cells to compensate for the loss of details by increasing the image contrast. The RGCs, with their disinhibitory receptive-field surround, then refine the local haze removal as well as the image detail enhancement. Results on a variety of real-world and synthetic hazy images show that the proposed model yields results comparable to or even better than the state-of-the-art methods, with the advantage of simultaneously dehazing and enhancing a single hazy image through a simple and straightforward implementation.
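A very loose sketch of the first two stages of this pipeline, under heavy simplifying assumptions of my own: a large-scale local mean stands in for the low-frequency haze removed by the bipolar-cell stage, a fixed gain on the residual detail stands in for the amacrine modulation, and the RGC-level refinement is omitted entirely.

```python
import numpy as np

def _local_mean(img, k):
    """Large-scale local mean (the low-frequency / haze estimate)."""
    pad = np.pad(img, ((k // 2, k // 2), (k // 2, k // 2), (0, 0)), mode='edge')
    h, w, _ = img.shape
    return sum(pad[i:i + h, j:j + w] for i in range(k) for j in range(k)) / (k * k)

def retina_dehaze(img, k=9, gain=2.0):
    """Bipolar-like stage: subtract the low-frequency haze estimate.
    Amacrine-like stage: amplify the remaining detail around mid-grey."""
    detail = img - _local_mean(img, k)
    return np.clip(0.5 + gain * detail, 0.0, 1.0)
```

Because the haze estimate absorbs any spatially uniform veil, adding a constant haze layer to a clear image leaves this sketch's output unchanged.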
Journal of the Optical Society of America A: Optics, Image Science, and Vision | 2017
Shaobing Gao; Ming Zhang; Chao-Yi Li; Yongjie Li
Recovering the true scene colors from a color-biased image by discounting the effects of the scene illuminant and the camera spectral sensitivity (CSS) at the same time is an ill-posed problem. Most color constancy (CC) models have been designed to first estimate the illuminant color, which is then removed from the color-biased image to obtain an image as if taken under white light, without explicit consideration of the CSS effect on CC. This paper first studies the CSS effect on illuminant estimation arising in inter-dataset-based CC (inter-CC), i.e., training a CC model on one dataset and then testing it on another dataset captured by a distinct CSS. We show the clear degradation of existing CC models in the inter-CC setting. We then propose a simple way to overcome this degradation by quickly learning a transform matrix between the two distinct CSSs (CSS-1 and CSS-2). The learned matrix is used to convert the data (including the illuminant ground truth and the color-biased images) rendered under CSS-1 into CSS-2, so that the CC model can be trained and applied on color-biased images under CSS-2 without the burdensome acquisition of a training set under CSS-2. Extensive experiments on synthetic and real images show that our method clearly improves the inter-CC performance of traditional CC algorithms. We suggest that, by taking the CSS effect into account, one is more likely to obtain truly color-constant images invariant to changes of both the illuminant and the camera sensor.
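Learning a linear transform between two sensors from matched color samples can be done with ordinary least squares; the sketch below assumes N matched RGB samples per sensor as rows of two matrices, which is a plausible but simplified reading of the paper's setup:

```python
import numpy as np

def learn_css_transform(colors_css1, colors_css2):
    """Fit a 3x3 matrix M minimizing ||colors_css1 @ M - colors_css2||,
    where each input is an N x 3 matrix of matched color samples rendered
    under the two camera spectral sensitivities."""
    M, *_ = np.linalg.lstsq(colors_css1, colors_css2, rcond=None)
    return M  # convert CSS-1 data via colors_css1 @ M
```

The same M is then applied to both the illuminant ground truth and the color-biased images before training the CC model for the target sensor.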
Chinese Conference on Pattern Recognition | 2016
Xian-Shi Zhang; Shaobing Gao; Ruo-Xuan Li; Xin-Yu Du; Chao-Yi Li; Yongjie Li
In this paper, we propose a novel model for computational color constancy, inspired by the remarkable ability of the human visual system (HVS) to perceive the color of objects as largely constant as the light source color changes. The proposed model imitates the color processing mechanisms at a specific level of the retina, the first stage of the HVS, from the adaptation emerging in the layers of cone photoreceptors and horizontal cells (HCs) to the color-opponent mechanism and the disinhibition effect of the non-classical receptive field in the layer of retinal ganglion cells (RGCs). In particular, HC modulation provides a global color correction with cone-specific lateral gain control, and the following RGCs refine the processing with iterative adaptation until all three opponent channels reach their stable states (i.e., produce stable outputs). Instead of explicitly estimating the scene illuminant(s), as most existing algorithms do, our model directly removes the effect of the scene illuminant. Evaluations on four commonly used color constancy datasets show that the proposed model produces competitive results in comparison with the state-of-the-art methods for scenes under either single or multiple illuminants. The results indicate that single opponency, especially the disinhibitory effect emerging in the subunit-structured receptive-field surround of RGCs, plays an important role in removing the scene illuminant(s) by inherently distinguishing the spatial structures of surfaces from the spatially extensive illuminant(s).
International Conference on Intelligent Science and Big Data Engineering | 2015
Shaobing Gao; Wangwang Han; Yanze Ren; Yongjie Li
High dynamic range (HDR) images are widely used since they capture finer scene information; however, problems remain in displaying them. A good rendering of HDR color images requires careful treatment of both the brightness and the chromaticity information. In this work, we first show that a global logarithmic mapping of the R, G, B channels may result in desaturation. We then propose an improved way of rendering HDR images. Specifically, keeping the chromaticity fixed, we apply a global transformation and a Retinex-based adaptive filter in the brightness channel only. We finally transform the result back to RGB space by combining the new brightness with the original chromaticity. Our model works well in preserving the chromaticity information: global mapping in the brightness channel alone is a good way to avoid desaturation, and it ensures good independence between brightness and chromaticity. Applying our method to HDR images, the details in both dark and bright areas are well displayed, with a better appearance in hue and saturation.
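The brightness-only mapping can be sketched in a few lines; this omits the paper's Retinex-based adaptive filtering step, and the log1p curve and mean-of-channels luminance are simplifying assumptions of the sketch:

```python
import numpy as np

def render_hdr(img, eps=1e-6):
    """Compress the luminance with a global log curve while keeping the
    per-pixel chromaticity (channel ratios) fixed, avoiding the
    desaturation caused by per-channel log mapping."""
    lum = img.mean(axis=2) + eps
    new_lum = np.log1p(lum) / np.log1p(lum.max())   # global log tone curve
    return img * (new_lum / lum)[..., None]          # rescale, ratios preserved
```

Because every channel of a pixel is multiplied by the same scalar, hue and saturation are untouched while the dynamic range of brightness is compressed.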