Network


Latest external collaborations at the country level. Click on the dots to explore the details.

Hotspot


Dive into the research topics where Philipp Urban is active.

Publications


Featured research published by Philipp Urban.


IEEE Transactions on Image Processing | 2012

Toward a Unified Color Space for Perception-Based Image Processing

Ingmar Lissner; Philipp Urban

Image processing methods that utilize characteristics of the human visual system require color spaces with certain properties to operate effectively. After analyzing different types of perception-based image processing problems, we present a list of properties that a unified color space should have. Due to contradictory perceptual phenomena and geometric issues, a color space cannot incorporate all these properties. We therefore identify the most important properties and focus on creating opponent color spaces without cross contamination between color attributes (i.e., lightness, chroma, and hue) and with maximum perceptual uniformity induced by color-difference formulas. Color lookup tables define simple transformations from an initial color space to the new spaces. We calculate such tables using multigrid optimization considering the Hung and Berns data of constant perceived hue and the CMC, CIE94, and CIEDE2000 color-difference formulas. The resulting color spaces exhibit low cross contamination between color attributes and are only slightly less perceptually uniform than spaces optimized exclusively for perceptual uniformity. We compare the CIEDE2000-based space with commonly used color spaces in two examples of perception-based image processing. In both cases, standard methods show improved results if the new space is used. All color-space transformations and examples are provided as MATLAB code on our website.
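The transformations above are defined by regular 3-D color lookup tables. A minimal sketch of applying such a table with trilinear interpolation (the grid size, axis bounds, and function name are illustrative assumptions, not taken from the paper):

```python
import numpy as np

def apply_color_lut(lab, lut, lo=-128.0, hi=128.0):
    """Map colors through a regular 3-D lookup table via trilinear interpolation.

    lab : (N, 3) input coordinates; lut : (G, G, G, 3) table sampled on a
    regular grid spanning [lo, hi] per axis (hypothetical bounds).
    """
    g = lut.shape[0]
    # Continuous grid position of each input color.
    t = (np.asarray(lab, float) - lo) / (hi - lo) * (g - 1)
    t = np.clip(t, 0, g - 1 - 1e-9)
    i0 = np.floor(t).astype(int)  # lower corner indices
    f = t - i0                    # fractional offsets in [0, 1)
    out = np.zeros((len(t), 3))
    # Accumulate the 8 corner contributions of each surrounding grid cell.
    for dx in (0, 1):
        for dy in (0, 1):
            for dz in (0, 1):
                w = (np.where(dx, f[:, 0], 1 - f[:, 0])
                     * np.where(dy, f[:, 1], 1 - f[:, 1])
                     * np.where(dz, f[:, 2], 1 - f[:, 2]))
                out += w[:, None] * lut[i0[:, 0] + dx, i0[:, 1] + dy, i0[:, 2] + dz]
    return out
```

With an identity table (each grid node storing its own coordinates), the transform reproduces its input, which is a quick sanity check before loading a real optimized table.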


IEEE Transactions on Image Processing | 2013

Image-Difference Prediction: From Grayscale to Color

Ingmar Lissner; Jens Preiss; Philipp Urban; Matthias Scheller Lichtenauer; Peter Zolliker

Existing image-difference measures show excellent accuracy in predicting distortions, such as lossy compression, noise, and blur. Their performance on certain other distortions could be improved; one example of this is gamut mapping. This is partly because they either do not interpret chromatic information correctly or they ignore it entirely. We present an image-difference framework that comprises image normalization, feature extraction, and feature combination. Based on this framework, we create image-difference measures by selecting specific implementations for each of the steps. Particular emphasis is placed on using color information to improve the assessment of gamut-mapped images. Our best image-difference measure shows significantly higher prediction accuracy on a gamut-mapping dataset than all other evaluated measures.


ACM Transactions on Graphics | 2015

Pushing the Limits of 3D Color Printing: Error Diffusion with Translucent Materials

Alan Brunton; Can Ates Arikan; Philipp Urban

Accurate color reproduction is important in many applications of 3D printing, from design prototypes to 3D color copies or portraits. Although full color is available via other technologies, multi-jet printers have greater potential for graphical 3D printing in terms of reproducing complex appearance properties. However, to date these printers cannot produce full color, and doing so poses substantial technical challenges, from the sheer amount of data to the translucency of the available color materials. In this article, we propose an error diffusion halftoning approach to achieve full color with multi-jet printers, which operates on multiple isosurfaces or layers within the object. We propose a novel traversal algorithm for voxel surfaces, which allows the transfer of existing error diffusion algorithms from 2D printing. The resulting prints faithfully reproduce colors, color gradients and fine-scale details.
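The traversal algorithm transfers existing 2D error diffusion to voxel surfaces. As a reference point, here is a minimal sketch of the classic 2D Floyd-Steinberg kernel that such algorithms build on; this is the textbook 2D building block, not the paper's surface traversal:

```python
import numpy as np

def floyd_steinberg(channel):
    """Binary error-diffusion halftoning of one channel with values in [0, 1]."""
    img = np.asarray(channel, float).copy()
    h, w = img.shape
    out = np.zeros_like(img)
    for y in range(h):
        for x in range(w):
            out[y, x] = 1.0 if img[y, x] >= 0.5 else 0.0
            err = img[y, x] - out[y, x]
            # Push the quantization error onto unvisited neighbors
            # with the Floyd-Steinberg weights 7/16, 3/16, 5/16, 1/16.
            if x + 1 < w:
                img[y, x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    img[y + 1, x - 1] += err * 3 / 16
                img[y + 1, x] += err * 5 / 16
                if x + 1 < w:
                    img[y + 1, x + 1] += err * 1 / 16
    return out
```

Because the error is forwarded rather than discarded, the binary output preserves the local average tone, which is the property the paper's voxel-surface traversal must maintain as well.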


IEEE Transactions on Image Processing | 2014

Color-Image Quality Assessment: From Prediction to Optimization

Jens Preiss; Felipe Fernandes; Philipp Urban

While image-difference metrics show good prediction performance on visual data, they often yield artifact-contaminated results if used as objective functions for optimizing complex image-processing tasks. We investigate in this regard the recently proposed color-image-difference (CID) metric particularly developed for predicting gamut-mapping distortions. We present an algorithm for optimizing gamut mapping employing the CID metric as the objective function. Resulting images contain various visual artifacts, which are addressed by multiple modifications yielding the improved color-image-difference (iCID) metric. The iCID-based optimizations are free from artifacts and retain contrast, structure, and color of the original image to a great extent. Furthermore, the prediction performance on visual data is improved by the modifications.


Journal of The Optical Society of America A-optics Image Science and Vision | 2007

Embedding non-Euclidean color spaces into Euclidean color spaces with minimal isometric disagreement

Philipp Urban; Mitchell R. Rosen; Roy S. Berns; Dierk Schleicher

Isometric embedding of non-Euclidean color spaces into Euclidean color spaces is investigated. Owing to regions of nonzero Gaussian curvature within common non-Euclidean color spaces, we focus on the determination of transformations into Euclidean spaces with minimal isometric disagreement. A computational method is presented for deriving such a color space transformation by means of a multigrid optimization, resulting in a simple color look-up table. The multigrid optimization is applied on the CIELAB space with the CMC, CIE94, and CIEDE2000 formulas. The mean disagreement between distances calculated by these formulas and Euclidean distances within the new spaces is far below 3% for all investigated color difference formulas. Color space transformations containing the inverse transformations are provided as MATLAB scripts at the first author's website.


IEEE Transactions on Image Processing | 2011

Paramer Mismatch-Based Spectral Gamut Mapping

Philipp Urban; Roy S. Berns

A spectral agreement between the original scene and a printed reproduction is required to achieve an illuminant-invariant visual match. This is usually impossible since the spectral gamut of typical printing systems is only a small subset of all natural reflectances. Out-of-gamut reflectances need to be mapped into the spectral gamut of the printer, minimizing the perceived error between original and reproduction for more than one illuminant. In this paper, we propose an algorithmic framework for spectral gamut mapping to achieve a reproduction that is as visually correct as a colorimetric reproduction for one illuminant and is superior for a set of other illuminants. A sequence of hierarchical mappings in 3-D color spaces is performed, utilizing the observer's color quantization to increase the spectral variability of subsequent transformations: for the most important illuminant, a traditional colorimetric gamut mapping is performed; for any additional illuminants, colors are mapped onto pixel-dependent paramer mismatch gamuts, preserving the visual equivalence of previous transformations. We present a separation method for investigating the spectral gamut mapping framework and show that hue shifts and chroma gains cannot always be avoided for the second and subsequent illuminants and that the order of illuminants has a large impact on the final reproduction.


Signal, Image and Video Processing | 2009

Metamer Density Estimated Color Correction

Philipp Urban; Rolf-Rainer Grigat

Color correction is the transformation of response values of scanners or digital cameras into a device-independent color space. In general, the transformation is not unique due to different acquisition and viewing illuminants and non-satisfaction of the Luther–Ives condition by a majority of devices. In this paper we propose a method that approximates the optimal color correction in the sense of a minimal mean error. The method is based on a representative set of reflectance spectra that is used to calculate a special basic collection of device metameric blacks and an appropriate fundamental metamer for each sensor response. Combining the fundamental metamer and the basic collection results in a set of reflectances that follows the density distribution of metameric reflectances if calculated by Bayesian inference. Transforming only positive and bounded spectra of the set into an observer's perceptually uniform color space results in a point cloud that follows the density distribution of device metamers within the metamer mismatch space of acquisition system and human observer. The mean value of this set is selected for color correction, since this is the point with a minimal mean color distance to all other points in the cloud. We present the results of various simulation experiments considering different acquisition and viewing illuminants, sensor types, noise levels, and existing methods for comparison.


Journal of The Optical Society of America A-optics Image Science and Vision | 2009

Spectral image reconstruction using an edge preserving spatio-spectral Wiener estimation

Philipp Urban; Mitchell R. Rosen; Roy S. Berns

Reconstruction of spectral images from camera responses is investigated using an edge-preserving spatio-spectral Wiener estimation. A Wiener denoising filter and a spectral reconstruction Wiener filter are combined into a single spatio-spectral filter using local propagation of the noise covariance matrix. To preserve edges, the local mean and covariance matrix of camera responses are estimated by bilateral weighting of neighboring pixels. We derive the edge-preserving spatio-spectral Wiener estimation by means of Bayesian inference and show that it fades into the standard Wiener reflectance estimation shifted by a constant reflectance in the case of vanishing noise. Simulation experiments conducted on a six-channel camera system and on multispectral test images show the performance of the filter, especially for edge regions. A test implementation of the method is provided as a MATLAB script at the first author's website.
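The standard Wiener reflectance estimation that the spatio-spectral filter reduces to (for vanishing noise) can be sketched as follows; this is the textbook zero-mean Wiener estimate only, with hypothetical variable names, and omits the paper's bilateral weighting and local noise-covariance propagation:

```python
import numpy as np

def wiener_reconstruct(c, A, Cr, Cn):
    """Classic spectral Wiener reflectance estimate from camera responses.

    c  : (m,) camera response vector
    A  : (m, n) spectral sensitivity matrix of the camera system
    Cr : (n, n) reflectance covariance (prior over spectra)
    Cn : (m, m) noise covariance
    """
    # Wiener gain: Cr A^T (A Cr A^T + Cn)^{-1}
    W = Cr @ A.T @ np.linalg.inv(A @ Cr @ A.T + Cn)
    return W @ c
```

As the noise covariance shrinks toward zero with an invertible response model, the gain approaches the inverse mapping and the estimate reproduces the underlying spectrum, matching the limiting behavior the abstract describes.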


IEEE Transactions on Image Processing | 2014

Image-Difference Prediction: From Color to Spectral

Steven Le Moan; Philipp Urban

We propose a new strategy to evaluate the quality of multi- and hyperspectral images from the perspective of human perception. We define the spectral image difference as the overall perceived difference between two spectral images under a set of specified viewing conditions (illuminants). First, we analyze the stability of seven image-difference features across illuminants by means of an information-theoretic strategy. We demonstrate, in particular, that in the case of common spectral distortions (spectral gamut mapping, spectral compression, spectral reconstruction), chromatic features vary much more than achromatic ones despite considering chromatic adaptation. Then, we propose two computationally efficient spectral image difference metrics and compare them to the results of a subjective visual experiment. A significant improvement is shown over existing metrics such as the widely used root-mean-square error.


Journal of The Optical Society of America A-optics Image Science and Vision | 2010

Upgrading color-difference formulas

Ingmar Lissner; Philipp Urban

We propose a method to improve the prediction performance of existing color-difference formulas with additional visual data. The formula is treated as the mean function of a Gaussian process, which is trained with experimentally determined color-discrimination data. Color-difference predictions are calculated using Gaussian process regression (GPR) considering the uncertainty of the visual data. The prediction accuracy of the CIE94 formula is significantly improved with the GPR approach for the Leeds and the Witt datasets. By upgrading CIE94 with GPR we achieve a significantly lower STRESS value of 26.58 compared with that for CIEDE2000 (27.49) on a combined dataset. The method could serve to improve the prediction performance of existing color-difference equations around particular color centers without changing the equations themselves.
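The core idea above, treating an existing formula as the prior mean of a Gaussian process so that the GP learns only the residual from visual data, can be sketched in a few lines. The 1-D inputs, RBF kernel, and hyperparameters are illustrative assumptions (real inputs are color pairs, and the paper accounts for the uncertainty of the visual data):

```python
import numpy as np

def gpr_upgrade(x_train, y_train, x_test, mean_fn,
                length=1.0, var=1.0, noise=1e-2):
    """GP regression with an existing formula as the prior mean (sketch).

    mean_fn plays the role of the color-difference formula (e.g., CIE94);
    the GP posterior corrects it toward the training observations.
    """
    def rbf(a, b):
        # Squared-exponential kernel on 1-D inputs.
        d2 = (a[:, None] - b[None, :]) ** 2
        return var * np.exp(-0.5 * d2 / length**2)

    K = rbf(x_train, x_train) + noise * np.eye(len(x_train))
    Ks = rbf(x_test, x_train)
    resid = y_train - mean_fn(x_train)          # deviation from the formula
    return mean_fn(x_test) + Ks @ np.linalg.solve(K, resid)
```

Where the training data agree with the formula, the residual vanishes and the prediction falls back to the formula itself, which is why the approach can improve behavior around particular color centers without changing the equation elsewhere.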

Collaboration


Dive into Philipp Urban's collaborations.

Top Co-Authors

Edgar Dörsam, Technische Universität Darmstadt
Ingmar Lissner, Technische Universität Darmstadt
Jens Preiss, Technische Universität Darmstadt
Rolf-Rainer Grigat, Hamburg University of Technology
Roy S. Berns, Rochester Institute of Technology
Mitchell R. Rosen, Rochester Institute of Technology
Kathrin Happel, Technische Universität Darmstadt
Maria Fedutina, Technische Universität Darmstadt