
Publications


Featured research published by Clément Fredembach.


International Conference on Image Processing | 2009

Color image dehazing using the near-infrared

Lex Schaul; Clément Fredembach; Sabine Süsstrunk

In landscape photography, distant objects often appear blurred with a blue color cast, a degradation caused by atmospheric haze. Dehazing can be performed to enhance image contrast, pleasantness, and information content.


International Conference on Image Processing | 2009

Designing color filter arrays for the joint capture of visible and near-infrared images

Yue M. Lu; Clément Fredembach; Martin Vetterli; Sabine Süsstrunk

Digital camera sensors are inherently sensitive to the near-infrared (NIR) part of the light spectrum. In this paper, we propose a general design for color filter arrays that allow the joint capture of visible/NIR images using a single sensor. We pose the CFA design as a novel spatial domain optimization problem, and provide an efficient iterative procedure that finds (locally) optimal solutions. Numerical experiments confirm the effectiveness of the proposed CFA design, which can simultaneously capture high quality visible and NIR image pairs.
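
The paper's optimized CFA patterns are not reproduced here; as a purely illustrative sketch, the snippet below samples a toy scene through a hypothetical 2x2 RGBN mosaic (an assumption, not the paper's design) to show how a single sensor can jointly capture visible and NIR measurements:

```python
import numpy as np

# Hypothetical 2x2 mosaic: R, G, B plus an NIR site replacing the second
# green of a Bayer pattern. This is NOT the paper's optimized CFA, only
# an illustration of single-sensor joint visible/NIR sampling.
PATTERN = np.array([["R", "G"],
                    ["N", "B"]])

def mosaic(rgbn):
    """Sample a (H, W, 4) scene (R, G, B, NIR planes) through the CFA.

    Returns a single (H, W) sensor image in which each pixel holds the
    value of whichever channel the mosaic exposes at that site.
    """
    h, w, _ = rgbn.shape
    chan_idx = {"R": 0, "G": 1, "B": 2, "N": 3}
    raw = np.empty((h, w))
    for y in range(h):
        for x in range(w):
            raw[y, x] = rgbn[y, x, chan_idx[PATTERN[y % 2, x % 2]]]
    return raw

scene = np.random.rand(4, 6, 4)   # toy R, G, B, NIR scene
print(mosaic(scene).shape)        # (4, 6): one measurement per pixel
```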


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2004

Eigenregions for image classification

Clément Fredembach; Michael Schröder; Sabine Süsstrunk

For certain databases and classification tasks, analyzing images based on region features instead of image features results in more accurate classifications. We introduce eigenregions, which are geometrical features that encompass the area, location, and shape properties of an image region, even if the region is spatially incoherent. Eigenregions are calculated using principal component analysis (PCA). On a database of 77,000 different regions obtained through the segmentation of 13,500 real-scene photographic images taken by nonprofessionals, eigenregions improved the detection of localized image classes by a noticeable amount. Additionally, eigenregions allow us to prove that the largest variance in natural image region geometry is due to area, and not to shape or position.
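
As a rough sketch of the eigenregion idea, one can flatten binary region masks into vectors and apply PCA; the leading principal components then play the role of eigenregions. The mask representation and normalisation below are simplifying assumptions, not the paper's exact feature construction:

```python
import numpy as np

def eigenregions(masks, k=8):
    """PCA over flattened binary region masks.

    masks: (n_regions, H, W) array of 0/1 region masks on a common
    grid. Returns the top-k principal components reshaped to (k, H, W).
    A sketch of the idea only; the paper's exact treatment of area,
    location, and shape is not reproduced here.
    """
    n, h, w = masks.shape
    X = masks.reshape(n, h * w).astype(float)
    X -= X.mean(axis=0)                      # centre the data
    # SVD of the centred data matrix gives the principal axes in Vt.
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    return Vt[:k].reshape(k, h, w)

# Toy example: random blob masks on a 16x16 grid.
rng = np.random.default_rng(0)
masks = (rng.random((100, 16, 16)) > 0.7).astype(float)
print(eigenregions(masks, k=4).shape)        # (4, 16, 16)
```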


International Conference on Computer Vision | 2007

Detecting Illumination in Images

Graham D. Finlayson; Clément Fredembach; Mark S. Drew

In this paper we present a surprisingly simple yet powerful method for detecting illumination: determining which pixels are lit by different lights in an image. Our method is based on the chromagenic camera, which takes two pictures of each scene: one captured as normal and the other through a coloured filter. Previous research has shown that the relationship between the colours (the RGBs) in the filtered and unfiltered images depends strongly on the colour of the light, and that this can be used to estimate the colour of the illuminant. While chromagenic illuminant estimation often works well, it can and does fail, and so is not itself a direct solution to the illuminant detection problem. In this paper we dispense with the goal of illumination estimation and seek only to use the chromagenic effect to find out which parts of a scene are illuminated by the same lights. The simplest implementation of our idea involves a combinatorial search. We precompute a dictionary of possible illuminant relations that might map RGBs to their filtered counterparts, from which we select a small number m, corresponding to the number of distinct lights we think might be present. Each pixel, or region, is assigned the relation from this m-set that best maps filtered to unfiltered RGB. All m-sets are tried in turn, and the one with the minimum overall prediction error is retained. At the end of this search, each pixel or region is assigned an integer between 1 and m indicating which of the m lights is thought to have illuminated it. Our simple search algorithm is tractable when m = 2 (and m = 3), and for this case we present experiments showing that our method does a remarkable job of detecting illumination in images: if the two lights are shadow and non-shadow, we find the shadows almost effortlessly. Compared to ground-truth data, our method delivers close to optimal performance.
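
A minimal sketch of the combinatorial m-set search for m = 2, under the simplifying assumptions that illuminant relations are per-channel gains and that each region is summarised by its mean RGB; the relation dictionary and error metric here are placeholders rather than the paper's actual choices:

```python
import itertools
import numpy as np

def detect_lights(unfiltered, filtered, relations, m=2):
    """Assign each region one of m lights via exhaustive m-set search.

    unfiltered, filtered: (n_regions, 3) mean RGBs per region.
    relations: (n_relations, 3) per-channel gains mapping filtered RGB
    to unfiltered RGB (a stand-in for the paper's relation dictionary).
    Returns (labels, error) for the best m-subset of relations.
    """
    best_labels, best_err = None, np.inf
    for subset in itertools.combinations(range(len(relations)), m):
        # Prediction error of every candidate relation on every region.
        preds = filtered[:, None, :] * relations[list(subset)][None, :, :]
        errs = np.linalg.norm(preds - unfiltered[:, None, :], axis=2)
        labels = errs.argmin(axis=1)           # best relation per region
        total = errs[np.arange(len(errs)), labels].sum()
        if total < best_err:
            best_err, best_labels = total, labels + 1  # lights 1..m
    return best_labels, best_err

rng = np.random.default_rng(1)
unf = rng.random((20, 3))
filt = unf / np.array([1.2, 1.0, 0.8])          # one toy "light"
rels = rng.random((6, 3)) + 0.5                 # toy relation dictionary
print(detect_lights(unf, filt, rels, m=2)[0])
```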


British Machine Vision Conference | 2005

Hamiltonian path based shadow removal

Clément Fredembach; Graham D. Finlayson

For some computer vision tasks, the presence of shadows in images can cause problems. For example, object tracks can be lost as an object crosses a shadow boundary. Recently, it has been shown that it is possible to remove shadows from images. Assuming that the locations of the shadows are known, shadow-free images are obtained in three steps. First, the image is differentiated. Second, the derivatives at the shadow edge are set to zero. Third, reintegration delivers an image without shadows. While this process can work well, the resulting shadow-free image often has artifacts and, moreover, reintegration is computationally expensive. In this paper we propose a method which can produce shadow-free images quickly and without artifacts. Our algorithm is based on two observations. First, shadows in images are closed regions, and if they are not closed, artifacts can result during reintegration. We therefore extend the existing methods and enforce the constraint that shadow boundaries must be closed prior to reintegration. Second, the standard reintegration method (solving a 2D Poisson equation) also, necessarily, introduces artifacts. The solution here is to reintegrate shadow and non-shadow regions almost separately. Specifically, we reintegrate the image along a Hamiltonian path that enters and exits each shadow region exactly once. Detail that was masked out at the shadow boundary is then infilled in a second step. The resulting reintegrated image has far fewer artifacts. Moreover, since the reintegration method is path based, it is both simple and fast. Experiments validate our approach.
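
The differentiate, zero, and reintegrate pipeline is easiest to see in one dimension. The sketch below works on a single scanline standing in for the Hamiltonian path, with a known shadow-boundary mask; the paper's closure constraint and detail-infilling step are omitted:

```python
import numpy as np

def reintegrate_path(signal, boundary):
    """Shadow removal along a 1D path.

    signal: log-intensity values along the path.
    boundary: boolean mask marking shadow-edge positions.
    Differentiate, zero the derivatives at the shadow boundary, then
    reintegrate with a cumulative sum. The step caused by the shadow
    disappears; the paper's second infilling step is omitted here.
    """
    grad = np.diff(signal)
    grad[boundary[1:]] = 0.0        # kill the shadow-edge derivative
    return np.concatenate([[signal[0]], signal[0] + np.cumsum(grad)])

# Toy scanline: a step of -1 (log units) inside a shadow.
x = np.linspace(0, 1, 100)
shadow = (x > 0.4) & (x < 0.7)
sig = np.sin(2 * np.pi * x) - shadow.astype(float)
edge = np.abs(np.diff(shadow.astype(float))) > 0   # boundary positions
edge = np.concatenate([[False], edge])
print(reintegrate_path(sig, edge)[:5])
```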


International Conference on Pattern Recognition | 2006

Simple Shadow Removal

Clément Fredembach; Graham D. Finlayson

Given the location of shadows, how can we obtain high-quality shadow-free images? Several methods have been proposed so far, but they either introduce artifacts or can be difficult to implement. We propose here a simple method that produces virtually error-free and shadow-free images in a very short time. Our approach is based on the insight that shadow regions differ from their shadow-free counterparts by a single scaling factor, and we derive a robust method to obtain that factor. We show that for complex scenes containing many disjoint shadow regions, our new method is faster and more robust than others previously published. The method delivers good performance on a variety of outdoor images.
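
A minimal sketch of the single-scale-factor idea, assuming a known shadow mask and estimating a per-channel factor from mean lit and shadowed intensities; this simple mean-ratio estimator is an assumption, since the paper derives its own robust estimator:

```python
import numpy as np

def remove_shadow(img, shadow):
    """Relight a shadow region by a single per-channel scale factor.

    img: (H, W, 3) linear RGB image; shadow: (H, W) boolean mask.
    The factor is estimated as the ratio of mean lit to mean shadowed
    intensity per channel; a more robust estimate would use pixels
    near the boundary, in the spirit of the paper's robustness claim.
    """
    out = img.copy()
    lit_mean = img[~shadow].mean(axis=0)      # (3,) mean over lit pixels
    shad_mean = img[shadow].mean(axis=0)      # (3,) mean over shadow
    scale = lit_mean / shad_mean              # one factor per channel
    out[shadow] = np.clip(img[shadow] * scale, 0.0, 1.0)
    return out

rng = np.random.default_rng(2)
img = np.full((32, 32, 3), 0.6) + 0.05 * rng.random((32, 32, 3))
mask = np.zeros((32, 32), dtype=bool)
mask[8:20, 8:20] = True
img[mask] *= np.array([0.3, 0.35, 0.45])      # synthetic bluish shadow
print(remove_shadow(img, mask)[12, 12])       # roughly back to ~0.6
```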


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2014

Automatic and Accurate Shadow Detection Using Near-Infrared Information

Dominic Rüfenacht; Clément Fredembach; Sabine Süsstrunk

We present a method to automatically detect shadows in a fast and accurate manner by taking advantage of the inherent sensitivity of digital camera sensors to the near-infrared (NIR) part of the spectrum. Dark objects, which confound many shadow detection algorithms, often have much higher reflectance in the NIR. We can thus build an accurate shadow candidate map based on image pixels that are dark both in the visible and NIR representations. We further refine the shadow map by incorporating ratios of the visible to the NIR image, based on the observation that commonly encountered light sources have very distinct spectra in the NIR band. The results are validated on a new database, which contains visible/NIR images for a large variety of real-world shadow-creating illuminant conditions, as well as manually labeled shadow ground truth. Both quantitative and qualitative evaluations show that our method outperforms current state-of-the-art shadow detection algorithms in terms of accuracy and computational efficiency.
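
A rough sketch of the candidate-map construction, assuming co-registered visible luminance and NIR images in [0, 1]; the darkness product, ratio test, and thresholds below are illustrative stand-ins for the paper's calibrated procedure:

```python
import numpy as np

def shadow_candidates(vis_lum, nir, t_dark=0.6, t_ratio=1.2):
    """Binary shadow map from visible + NIR darkness and their ratio.

    vis_lum, nir: (H, W) arrays in [0, 1]. Pixels dark in BOTH bands
    are shadow candidates (dark objects are usually bright in NIR, so
    they drop out). The candidate map is then refined with the
    NIR/visible ratio, exploiting the distinct NIR spectra of common
    light sources. Thresholds here are illustrative placeholders.
    """
    darkness = (1.0 - vis_lum) * (1.0 - nir)      # high where both dark
    candidates = darkness > t_dark
    ratio = nir / np.maximum(vis_lum, 1e-6)
    return candidates & (ratio > t_ratio)

vis = 0.7 * np.ones((8, 8))
nir = 0.8 * np.ones((8, 8))
vis[2:5, 2:5], nir[2:5, 2:5] = 0.1, 0.2           # shadowed patch
print(shadow_candidates(vis, nir).sum())          # 9 shadow pixels
```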


Mediterranean Electrotechnical Conference | 2010

GUI-aided NIR and color image blending

Andrea Guidi; Radhakrishna Achanta; Clément Fredembach; Sabine Süsstrunk

A near-infrared (NIR) image of a scene contains different information than the corresponding visible spectrum (RGB) image, due to physical phenomena related to scattering, absorption, and reflectance of the different radiation bands. It can be desirable to combine the most relevant image content from both NIR and RGB to improve image quality. In this paper, we present two schemes for combining NIR and RGB information to obtain visually better quality images. The first is an automatic approach that performs segment-based NIR blending by local and global entropy maximization. The second uses a GUI-aided approach in which the user chooses the segments and manually controls the blending. To minimize artifacts at the segment boundaries, three different boundary smoothing methods are introduced and compared in terms of speed and quality of output.
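
The automatic scheme's segment-wise entropy comparison might look roughly like the following; the hard per-segment pick and the toy segmentation are simplifications, and the paper's boundary-smoothing methods are omitted:

```python
import numpy as np

def entropy(values, bins=64):
    """Shannon entropy of an intensity histogram."""
    hist, _ = np.histogram(values, bins=bins, range=(0.0, 1.0))
    p = hist / max(hist.sum(), 1)
    p = p[p > 0]
    return -(p * np.log2(p)).sum()

def blend_by_entropy(vis_lum, nir, labels):
    """Per-segment pick of the higher-entropy band.

    vis_lum, nir: (H, W) luminance/NIR in [0, 1]; labels: (H, W) int
    segment map. Each segment keeps whichever band has the higher
    local entropy; this hard pick stands in for the paper's blending,
    and its boundary smoothing is omitted.
    """
    out = vis_lum.copy()
    for s in np.unique(labels):
        m = labels == s
        if entropy(nir[m]) > entropy(vis_lum[m]):
            out[m] = nir[m]
    return out

rng = np.random.default_rng(4)
vis = rng.random((16, 16)) * 0.1          # flat, low-detail visible
nir = rng.random((16, 16))                # high-detail NIR
labels = (np.arange(16)[:, None] // 8) * np.ones(16, dtype=int)
print(blend_by_entropy(vis, nir, labels).mean())
```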


Proceedings of SPIE | 2012

Measuring saliency in images: which experimental parameters for the assessment of image quality?

Clément Fredembach; Geoff Woolfe; Jue Wang

Predicting which areas of an image are perceptually salient or attended to has become an essential pre-requisite of many computer vision applications. Because observers are notoriously unreliable in remembering where they look a posteriori, and because asking where they look while observing the image necessarily influences the results, ground truth about saliency and visual attention has to be obtained by gaze tracking methods. From the early work of Buswell and Yarbus to the most recent forays in computer vision there has been, perhaps unfortunately, little agreement on standardisation of eye tracking protocols for measuring visual attention. As the number of parameters involved in experimental methodology can be large, their individual influence on the final results is not well understood. Consequently, the performance of saliency algorithms, when assessed by correlation techniques, varies greatly across the literature. In this paper, we concern ourselves with the problem of image quality. Specifically: where people look when judging images. We show that in this case, the performance gap between existing saliency prediction algorithms and experimental results is significantly larger than otherwise reported. To understand this discrepancy, we first devise an experimental protocol that is adapted to the task of measuring image quality. In a second step, we compare our experimental parameters with the ones of existing methods and show that a lot of the variability can directly be ascribed to these differences in experimental methodology and choice of variables. In particular, the choice of a task, e.g., judging image quality vs. free viewing, has a great impact on measured saliency maps, suggesting that even for a mildly cognitive task, ground truth obtained by free viewing does not adapt well. Careful analysis of the prior art also reveals that systematic bias can occur depending on instrumental calibration and the choice of test images. We conclude this work by proposing a set of parameters, tasks and images that can be used to compare the various saliency prediction methods in a manner that is meaningful for image quality assessment.
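
The correlation-based scoring that the paper's analysis revolves around can be sketched as a Pearson correlation coefficient between a predicted saliency map and a fixation-derived ground-truth map; this is one common metric in the literature, not necessarily the exact one used in the paper:

```python
import numpy as np

def saliency_cc(pred, fixation_map):
    """Pearson correlation coefficient between two saliency maps.

    pred: (H, W) model prediction; fixation_map: (H, W) ground truth,
    e.g. a blurred fixation-density map from gaze tracking. Both maps
    are standardised before correlating, as is usual for the CC metric.
    """
    p = (pred - pred.mean()) / pred.std()
    f = (fixation_map - fixation_map.mean()) / fixation_map.std()
    return float((p * f).mean())

rng = np.random.default_rng(5)
gt = rng.random((32, 32))
noisy_pred = gt + 0.5 * rng.random((32, 32))
print(saliency_cc(noisy_pred, gt))     # strong positive correlation
```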


Color Imaging Conference | 2008

Colouring the near infrared

Clément Fredembach; Sabine Süsstrunk

Collaboration


Dive into Clément Fredembach's collaboration.

Top Co-Authors

Sabine Süsstrunk, École Polytechnique Fédérale de Lausanne
Daniel Tamburrino, École Polytechnique Fédérale de Lausanne
Lex Schaul, École Polytechnique Fédérale de Lausanne
Neda Salamati, École Polytechnique Fédérale de Lausanne
Dominic Rüfenacht, University of New South Wales
Andrea Guidi, École Polytechnique Fédérale de Lausanne
Martin Vetterli, École Polytechnique Fédérale de Lausanne