Arjan Gijsenij
John Wiley & Sons
Publications
Featured research published by Arjan Gijsenij.
IEEE Transactions on Pattern Analysis and Machine Intelligence | 2012
Arjan Gijsenij; Theo Gevers; Joost van de Weijer
Edge-based color constancy methods make use of image derivatives to estimate the illuminant. However, different edge types exist in real-world images, such as material, shadow, and highlight edges, and these may have a distinctive influence on the performance of illuminant estimation. Therefore, in this paper, an extensive analysis is provided of the effect of different edge types on the performance of edge-based color constancy methods. First, an edge-based taxonomy is presented, classifying edge types based on their photometric properties (e.g., material, shadow-geometry, and highlights). Then, a performance evaluation of edge-based color constancy is provided using these different edge types. From this evaluation, it is derived that specular and shadow edge types are more valuable than material edges for estimating the illuminant. To this end, the (iterative) weighted Gray-Edge algorithm is proposed, in which these edge types are emphasized in the estimation of the illuminant. Images recorded under controlled circumstances demonstrate that the proposed iterative weighted Gray-Edge algorithm based on highlights reduces the median angular error by approximately 25 percent. In an uncontrolled environment, improvements in angular error of up to 11 percent are obtained with respect to regular edge-based color constancy.
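For context, a minimal sketch of the standard (unweighted) Gray-Edge estimator that this work extends, assuming a linear-RGB float image; the function names, the Minkowski order p, and the smoothing scale sigma are illustrative choices, not values from the paper.

```python
import numpy as np
from scipy import ndimage

def gray_edge_estimate(img, p=6, sigma=1.0):
    """Gray-Edge hypothesis: the Minkowski-p mean of image derivatives
    is assumed achromatic, so per-channel derivative energy estimates
    the illuminant. img: (H, W, 3) float array in linear RGB."""
    e = np.zeros(3)
    for c in range(3):
        # Gaussian-derivative filtering along x and y for this channel.
        dx = ndimage.gaussian_filter(img[..., c], sigma, order=[0, 1])
        dy = ndimage.gaussian_filter(img[..., c], sigma, order=[1, 0])
        grad = np.hypot(dx, dy)
        e[c] = np.mean(grad ** p) ** (1.0 / p)
    return e / np.linalg.norm(e)  # unit-length illuminant estimate

def angular_error(e_est, e_true):
    """Recovery angular error (degrees) between two illuminant RGBs."""
    cos = np.dot(e_est, e_true) / (np.linalg.norm(e_est) * np.linalg.norm(e_true))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))
```

The weighted variant proposed in the paper would additionally scale each derivative by a per-pixel weight reflecting its photometric edge type; that weighting scheme is not reproduced here.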
IEEE Transactions on Pattern Analysis and Machine Intelligence | 2017
Graham D. Finlayson; Roshanak Zakizadeh; Arjan Gijsenij
The angle between the RGBs of the measured illuminant and the estimated illuminant colors (the recovery angular error) has been used to evaluate the performance of illuminant estimation algorithms. However, we noticed that this metric is not in line with how illuminant estimates are actually used. Normally, the illuminant estimate is 'divided out' of the image to, hopefully, provide image colors that are not confounded by the color of the light. Yet even when the resulting reproduction is identical, the same scene can have a large range of recovery errors. In this work, the scale of this problem with the recovery error is quantified. Next, we propose a new metric for evaluating illuminant estimation algorithms, called the reproduction angular error, defined as the angle between the RGBs of a white surface when the actual and estimated illuminations are 'divided out'. Our new metric ties algorithm performance to how the illuminant estimates are used. For a given algorithm, adopting the new reproduction angular error leads to different optimal parameters. Further, the ranked list of best-to-worst algorithms changes when the reproduction angular error is used. The importance of using an appropriate performance metric is established.
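The reproduction angular error is defined in prose above; the following is a minimal sketch of both metrics, assuming strictly positive RGB illuminant vectors (function names are illustrative).

```python
import numpy as np

def recovery_angular_error(e_est, e_true):
    """Classic recovery error: angle (degrees) between the estimated
    and measured illuminant RGBs."""
    cos = np.dot(e_est, e_true) / (np.linalg.norm(e_est) * np.linalg.norm(e_true))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

def reproduction_angular_error(e_est, e_true):
    """Reproduction error: 'divide out' the estimate from the true light
    (giving the RGB of a corrected white surface) and measure the angle
    to the achromatic axis (1, 1, 1). Assumes positive numpy arrays."""
    r = e_true / e_est             # corrected white-surface RGB
    u = np.ones(3) / np.sqrt(3.0)  # ideal achromatic white direction
    cos = np.dot(r, u) / np.linalg.norm(r)
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))
```

Under these definitions, estimates that correct a scene to the same reproduction can still sit at different recovery angles from the true light, which is the mismatch the abstract describes.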
IEEE Transactions on Image Processing | 2014
Noha M. Elfiky; Theo Gevers; Arjan Gijsenij; Jordi Gonzàlez
The aim of color constancy is to remove the effect of the color of the light source. As color constancy is inherently an ill-posed problem, most existing color constancy algorithms are based on specific imaging assumptions (e.g., the gray-world and white-patch assumptions). In this paper, 3D geometry models are used to determine which color constancy method to use for the different geometrical regions (depth/layer) found in images. The aim is to classify images into stages (rough 3D geometry models). According to the stage models, images are divided into stage regions using hard and soft segmentation. After that, the best color constancy method is selected for each geometry depth. To this end, we propose a method to combine color constancy algorithms by investigating the relation between depth, local image statistics, and color constancy. Image statistics are then exploited per depth to select the proper color constancy method. Our approach opens the possibility of estimating multiple illuminants by distinguishing nearby light sources from distant illumination. Experiments on state-of-the-art data sets show that the proposed algorithm outperforms state-of-the-art single color constancy algorithms with an improvement of almost 50% in median angular error. When using a perfect classifier (i.e., all test images are correctly classified into stages), the proposed method achieves an improvement of 52% in median angular error compared with the best-performing single color constancy algorithm.
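The abstract describes selecting a color constancy method per geometric stage. Below is a hypothetical sketch of only the combination step, merging per-region estimates weighted by region size; the paper's statistics-driven method selection is not reproduced, and combine_by_stage, gray_world, and the size-based weighting are assumptions.

```python
import numpy as np

def gray_world(img, mask):
    """Gray-world estimate restricted to the masked pixels."""
    e = img[mask].mean(axis=0)
    return e / np.linalg.norm(e)

def combine_by_stage(img, stage_mask, methods):
    """Merge per-stage illuminant estimates into one global estimate.
    img: (H, W, 3) float image; stage_mask: (H, W) integer stage labels;
    methods: dict mapping stage id -> callable(img, mask) -> unit RGB."""
    estimates, weights = [], []
    for stage_id, method in methods.items():
        mask = stage_mask == stage_id
        if mask.any():
            estimates.append(method(img, mask))
            weights.append(mask.sum())  # weight by region size
    e = np.average(np.asarray(estimates), axis=0, weights=weights)
    return e / np.linalg.norm(e)
```

Keeping the per-stage estimates separate instead of averaging them is what would allow the multiple-illuminant use case mentioned in the abstract.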
Journal of The Optical Society of America A-optics Image Science and Vision | 2013
Marcel P. Lucassen; Theo Gevers; Arjan Gijsenij; Niels Dekker
We measure the color fidelity of visual scenes that are rendered under different (simulated) illuminants and shown on a calibrated LCD display. Observers make triad illuminant comparisons involving the renderings from two chromatic test illuminants and one achromatic reference illuminant shown simultaneously. Four chromatic test illuminants are used: two along the daylight locus (yellow and blue), and two perpendicular to it (red and green). The observers select the rendering having the best color fidelity, thereby indirectly judging which of the two test illuminants induces the smallest color differences compared to the reference. Both multicolor test scenes and natural scenes are studied. The multicolor scenes are synthesized and represent ellipsoidal distributions in CIELAB chromaticity space having the same mean chromaticity but different chromatic orientations. We show that, for those distributions, color fidelity is best when the vector of the illuminant change (pointing from neutral to chromatic) is parallel to the major axis of the scene's chromatic distribution. For our selection of natural scenes, which generally have much broader chromatic distributions, we measure a higher color fidelity for the yellow and blue illuminants than for red and green. Scrambled versions of the natural images are also studied to exclude possible semantic effects. We quantitatively predict the average observer response (i.e., the illuminant probability) with four types of models, differing in the extent to which they incorporate information processing by the visual system. Results show different levels of performance for the models, and different levels for the multicolor scenes and the natural scenes. Overall, models based on the scene-averaged color difference have the best performance. We discuss how color constancy algorithms may be improved by exploiting knowledge of the chromatic distribution of the visual scene.
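The best-performing models are reported to be based on the scene-averaged color difference. Here is a hedged sketch under that reading; the soft choice rule and its gain k are assumptions for illustration, not the paper's model.

```python
import numpy as np

def mean_color_difference(lab_test, lab_ref):
    """Scene-averaged CIELAB difference (mean Delta E*ab) between a test
    rendering and the reference rendering, both (N, 3) Lab arrays."""
    return np.mean(np.linalg.norm(lab_test - lab_ref, axis=1))

def illuminant_choice_probability(lab_a, lab_b, lab_ref, k=1.0):
    """Hypothetical soft choice rule: the rendering with the smaller
    scene-averaged Delta E is more likely judged to have the better
    color fidelity. k controls how deterministic the choice is
    (an assumption, not a fitted value from the paper)."""
    da = mean_color_difference(lab_a, lab_ref)
    db = mean_color_difference(lab_b, lab_ref)
    return 1.0 / (1.0 + np.exp(-k * (db - da)))  # P(choose rendering a)
```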
Archive | 2012
Andrew D. Bagdanov; Theo Gevers; Arjan Gijsenij; Joost van de Weijer; Jan-Mark Geusebroek
International Conference on Computer Vision | 2007
Arjan Gijsenij; Theo Gevers; Joost van de Weijer
Archive | 2017
Graham D. Finlayson; Ghalia Hemrit; Arjan Gijsenij; Peter V. Gehler
Archive | 2012
Cees G. M. Snoek; Theo Gevers; Arjan Gijsenij; Joost van de Weijer; Jan-Mark Geusebroek
Archive | 2012
Theo Gevers; Arjan Gijsenij; Joost van de Weijer; Jan-Mark Geusebroek
International Conference on Computer Graphics, Imaging and Visualisation | 2008
Arjan Gijsenij; Theo Gevers; Joost van de Weijer