Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Marcel P. Lucassen is active.

Publication


Featured research published by Marcel P. Lucassen.


Vision Research | 1996

Color constancy under natural and artificial illumination

Marcel P. Lucassen; Jan Walraven

Color constancy was studied under conditions simulating either natural or extremely artificial illumination. Four test illuminants were used: two broadband phases of daylight (correlated color temperatures 4000 and 25,000 K) and two spectrally impoverished metamers of these lights, each consisting of only two wavelengths. A computer-controlled color monitor was used for reproducing the chromaticities and luminances of an array of Munsell color samples rendered under these illuminants. An asymmetric haploscopic matching paradigm was used in which the same stimulus pattern, illuminated either by one of the test illuminants or by a standard broadband daylight (D65), was alternately presented to the left and right eye. Subjects adjusted the RGB settings of the samples seen under D65 (match condition) to match the appearance of the color samples seen under the test illuminant. The results show the expected failure of color constancy under two-wavelength illumination, and approximate color constancy under natural illumination. Quantitative predictions of the results were made on the basis of two different models: a computational model for recovering surface reflectance, and a model that assumes the color response to be determined by cone-specific contrast and the absolute level of stimulation (Lucassen & Walraven, 1993). The latter model was found to provide somewhat more accurate predictions under all illuminant conditions.
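Quantitatively, the simplest account of approximate color constancy is von Kries scaling, in which each cone signal is divided by the illuminant's response in that cone class. The sketch below illustrates that principle only; it is not the cone-contrast model of Lucassen and Walraven (1993), and the cone and illuminant values are hypothetical.

```python
# Minimal sketch of von Kries adaptation (illustrative, hypothetical values).
# Scaling each cone class by its response to the illuminant makes the
# adapted descriptor illuminant-invariant for diagonal illuminant changes.

def cone_response(reflectance, illuminant):
    # Per-cone-class product: response to a surface under an illuminant.
    return [r * e for r, e in zip(reflectance, illuminant)]

def von_kries(signal, illuminant):
    # Divide each cone signal by the illuminant's response in that class.
    return [s / e for s, e in zip(signal, illuminant)]

surface = [0.4, 0.6, 0.2]    # hypothetical L, M, S reflectance weights
daylight = [1.0, 0.9, 0.8]   # hypothetical chromatic test illuminant
d65 = [1.0, 1.0, 1.0]        # achromatic reference illuminant

adapted_test = von_kries(cone_response(surface, daylight), daylight)
adapted_ref = von_kries(cone_response(surface, d65), d65)

# Both renderings map to (numerically) the same surface descriptor:
assert all(abs(a - b) < 1e-12 for a, b in zip(adapted_test, adapted_ref))
```

Under this scheme color constancy holds exactly for any diagonal illuminant change, which is why the two-wavelength illuminants in the study, whose effect on cone signals is not diagonal, break constancy.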


european conference on computer vision | 2008

A Perceptual Comparison of Distance Measures for Color Constancy Algorithms

Arjan Gijsenij; Theo Gevers; Marcel P. Lucassen

Color constancy is the ability to measure image features independent of the color of the scene illuminant and is an important topic in color and computer vision. As many color constancy algorithms exist, different distance measures are used to compute their accuracy. In general, these distance measures are based on mathematical principles such as the angular error and the Euclidean distance. However, it is unknown to what extent these distance measures correlate with human vision. Therefore, in this paper, a taxonomy of different distance measures for color constancy algorithms is presented. The main goal is to analyze the correlation between the observed quality of the output images and the different distance measures for illuminant estimates. The output images are the color-corrected images obtained using the illuminant estimates of the color constancy algorithms, and the quality of these images is determined by human observers. The distance measures are analyzed for how well they mimic differences in the color naturalness of images as perceived by humans. Based on the theoretical and experimental results on spectral and real-world data sets, it can be concluded that the perceptual Euclidean distance (PED) with weight coefficients (w_R = 0.26, w_G = 0.70, w_B = 0.04) finds its roots in human vision and correlates significantly better with human judgments than all other distance measures, including the angular error and the Euclidean distance.


british machine vision conference | 2015

Color Constancy by Deep Learning

Zhongyu Lou; Theo Gevers; Ninghang Hu; Marcel P. Lucassen

Computational color constancy aims to estimate the color of the light source. The performance of many vision tasks, such as object detection and scene understanding, may benefit from color constancy by estimating the correct object colors. Since traditional color constancy methods are based on specific assumptions, none of those methods can be used as a universal predictor. Further, training-based color constancy approaches rely on shallow learning schemes and therefore suffer from limited learning capacity. In this paper, we propose a framework using Deep Neural Networks (DNNs) to obtain an accurate light source estimator to achieve color constancy. We formulate color constancy as a DNN-based regression approach to estimate the color of the light source. The model is trained using datasets of more than a million images. Experiments show that the proposed algorithm outperforms the state-of-the-art by 9%; in cross-dataset validation it reduces the median angular error by 35%. Further, in our implementation, the algorithm operates at more than 100 fps.
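The Grey-World algorithm is a classic example of the assumption-based methods that learned estimators like this one are compared against. A minimal sketch (illustrative only; the pixel data are hypothetical):

```python
# Grey-World sketch: assume the average reflectance in a scene is
# achromatic, so the per-channel mean of the image estimates the
# illuminant color. This is the "specific assumption" that makes such
# methods fail on scenes dominated by one color.

def grey_world(pixels):
    n = len(pixels)
    mean = [sum(p[c] for p in pixels) / n for c in range(3)]
    s = sum(mean)
    return [m / s for m in mean]  # normalized illuminant estimate

# Grey surfaces of varying lightness under a reddish illuminant:
illum = (0.5, 0.3, 0.2)
pixels = [tuple(r * e for e in illum) for r in (0.2, 0.5, 0.8)]
print(grey_world(pixels))  # recovers approximately (0.5, 0.3, 0.2)
```

Because the scene here really is grey on average, the estimate is exact up to rounding; a regression-based estimator is attractive precisely because real scenes often violate this assumption.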


visual information processing conference | 2003

A universal color image quality metric

Alexander Toet; Marcel P. Lucassen

We extend a recently introduced universal grayscale image quality index to the perceptually decorrelated color space. The resulting color image quality index quantifies the distortion of a processed color image relative to its original version. We evaluated the new color image quality metric through observer experiments in which subjects ranked images according to perceived distortion. The metric correlates strongly with human perception and can therefore be used to assess the performance of color image coding and compression schemes, color image enhancement algorithms, synthetic color image generators, and color image fusion schemes.


Journal of The Optical Society of America A-optics Image Science and Vision | 2015

How psychophysical methods influence optimizations of color difference formulas

Eric Kirchner; Niels Dekker; Marcel P. Lucassen; Lan Njo; Ivo van der Lans; Philipp Urban; Rafael Huertas

For developing color difference formulas, there are several choices to be made about the psychophysical method used for gathering visual (observer) data. We tested three different psychophysical methods: gray scales, constant stimuli, and two-alternative forced choice (2AFC). Our results show that when using gray scales or constant stimuli, assessments of color differences are biased toward lightness differences. This bias is particularly strong in LCD monitor experiments, and also present when using physical paint samples. No such bias is found when using 2AFC. In that case, however, observer responses are affected by other factors that are not accounted for by current color difference formulas. For accurate prediction of relative color differences, our results show, in agreement with other works, that modern color difference formulas do not perform well. We also investigated whether the use of digital images as presented on LCD displays is a good alternative to using physical samples. Our results indicate that there are systematic differences between these two media.


Journal of The Optical Society of America A-optics Image Science and Vision | 2013

Effects of chromatic image statistics on illumination induced color differences

Marcel P. Lucassen; Theo Gevers; Arjan Gijsenij; Niels Dekker

We measure the color fidelity of visual scenes that are rendered under different (simulated) illuminants and shown on a calibrated LCD display. Observers make triad illuminant comparisons involving the renderings from two chromatic test illuminants and one achromatic reference illuminant, shown simultaneously. Four chromatic test illuminants are used: two along the daylight locus (yellow and blue), and two perpendicular to it (red and green). The observers select the rendering having the best color fidelity, thereby indirectly judging which of the two test illuminants induces the smallest color differences compared to the reference. Both multicolor test scenes and natural scenes are studied. The multicolor scenes are synthesized and represent ellipsoidal distributions in CIELAB chromaticity space having the same mean chromaticity but different chromatic orientations. We show that, for those distributions, color fidelity is best when the vector of the illuminant change (pointing from neutral to chromatic) is parallel to the major axis of the scene's chromatic distribution. For our selection of natural scenes, which generally have much broader chromatic distributions, we measure a higher color fidelity for the yellow and blue illuminants than for red and green. Scrambled versions of the natural images are also studied to exclude possible semantic effects. We quantitatively predict the average observer response (i.e., the illuminant probability) with four types of models, differing in the extent to which they incorporate information processing by the visual system. Results show different levels of performance for the models, and different levels for the multicolor scenes and the natural scenes. Overall, models based on the scene-averaged color difference have the best performance. We discuss how color constancy algorithms may be improved by exploiting knowledge of the chromatic distribution of the visual scene.


Proceedings of SPIE | 2014

Mathematical limitations when choosing psychophysical methods: geometric versus linear grey scales

Niels Dekker; Marcel P. Lucassen; Eric Kirchner; Philipp Urban; Rafael Huertas

The grey scale method is commonly used for investigating differences in material appearance. Specifically, for testing color difference equations, perceived color differences between sample pairs are obtained by visually comparing them to differences in a series of achromatic sample pairs. Two types of grey scales are known: linear and geometric. Their instrumental color differences vary linearly or geometrically (i.e., exponentially), respectively. Geometric grey scales are used in ISO standards and standard procedures of the textile industries. We compared both types of grey scale in a psychophysical study. Color patches were shown on a color-calibrated display. Ten observers assessed color differences in sample pairs, with color differences between ΔEab = 0.13 and 2.50. Assessments were scored by comparison to either a linear or a geometric grey scale, both consisting of six achromatic pairs. For the linear scale we used color differences ΔEab = 0.0, 0.6, 1.2, ..., 3.0; for the geometric scale, ΔEab = 0.0, 0.4, 0.8, 1.6, 3.2, 6.4. Our results show that for the geometric scale, data from visual scores cluster at the low end of the scale and do not match the ΔEab range of the grey scale pairs. We explain why this happens, and why it is mathematically inevitable when studying small color differences with geometric grey scales. Our analysis explains why previous studies showed larger observer variability for geometric than for linear scales.


Holst, G.C., Infrared Imaging Systems: Design, Analysis, Modeling, and Testing XVI, 35-41 | 2005

Identification in static luminance and color noise

Piet Bijl; Marcel P. Lucassen; Jolanda Roelofsen

If images from multiple sources (e.g. from the different bands of a multi-band sensor) are displayed in color, signal and noise may appear as luminance and color differences in the image. As a consequence, the perception of color differences may be important for target acquisition performance with fused imagery. Luminance and color can be represented in a 3-D space; in the CIE 1994 color difference model, the three perceptual directions are lightness (L*), chroma (C*) and hue (h*). In this 3-D color space, we performed two perception experiments. In Experiment 1, we measured human observer detection thresholds (JNDs) for uniformly distributed static noise (fixed pattern noise) in L*, C* or h* on a uniform background. The results show that the JND for noise in L* is significantly lower than for noise in C* or h*. In Experiment 2, we measured the threshold contrast for identification (orientation discrimination) of a U-shaped test target on a noisy background. With test symbol and background noise in L*, the ratio between signal threshold and noise level is constant. With the symbol in a different direction, we found little dependency on noise level. The results may be used to optimize the use of color for human detection and identification performance with multi-band systems.


electronic imaging | 2015

Goniochromatic difference between effect coatings: Is the whole more than the sum of its parts?

Jana Blahová; Eric Kirchner; Niels Dekker; Marcel P. Lucassen; Lan Njo; Ivo van der Lans; Philipp Urban; Rafael Huertas


Journal of The Optical Society of America A-optics Image Science and Vision | 2015

How color difference formulas depend on reference pairs in the underlying constant stimuli experiment

Eric Kirchner; Niels Dekker; Marcel P. Lucassen; Lan Njo; Ivo van der Lans; Pim Koeckhoven; Philipp Urban; Rafael Huertas
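Several of the studies above score visual judgments against instrumental color differences. As a reference point, the classic CIE76 formula ΔEab is simply the Euclidean distance in CIELAB (a sketch; modern formulas such as CIEDE2000 add further correction terms, and the sample values below are hypothetical):

```python
import math

def delta_e_ab(lab1, lab2):
    # CIE76 color difference: Euclidean distance between two
    # (L*, a*, b*) coordinates in CIELAB space.
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(lab1, lab2)))

# A hypothetical near-threshold pair, on the order of the differences
# (0.13 <= dEab <= 2.50) used in the grey-scale experiment above:
pair = ((50.0, 10.0, 10.0), (50.5, 10.2, 9.9))
print(round(delta_e_ab(*pair), 3))  # 0.548
```

Because lightness, chroma, and hue contribute to this distance in fixed proportions, psychophysical biases toward lightness differences, as reported above for gray scales and constant stimuli, feed directly into any formula optimized on such data.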

Collaboration


Dive into Marcel P. Lucassen's collaborations.

Top Co-Authors

Theo Gevers

University of Amsterdam


Jan Walraven

University of Colorado Boulder
