David Hasler
École Polytechnique Fédérale de Lausanne
Publications
Featured research published by David Hasler.
human vision and electronic imaging conference | 2003
David Hasler; Sabine Süsstrunk
We want to integrate colourfulness in an image quality evaluation framework. This quality framework is meant to evaluate the perceptual impact of a compression algorithm or an error-prone communication channel on the quality of an image. The image might go through various enhancement or compression algorithms, resulting in a different, but not necessarily worse, image. In other words, we measure quality, not fidelity to the original picture. While modern colour appearance models are able to predict the perception of colourfulness of simple patches on uniform backgrounds, there is no agreement on how to measure the overall colourfulness of a picture of a natural scene. We try to quantify the colourfulness of natural images in order to perceptually qualify the effect that processing or coding has on colour. We set up a psychophysical category scaling experiment and ask people to rate images using seven categories of colourfulness. We then fit a metric to the results and obtain a correlation of over 90% with the experimental data. The metric is meant to be used in real time on video streams. Issues related to hue are not addressed in this paper.
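To make the flavour of such a metric concrete, the sketch below computes a simple opponent-channel colourfulness score on an RGB image. It is a minimal illustration, assuming the opponent channels rg = R - G and yb = (R + G)/2 - B and a fixed weighting between the spread and the mean of those channels; the coefficients of the published, psychophysically fitted metric may differ.

```python
import numpy as np

def colourfulness(rgb):
    """Sketch of an opponent-channel colourfulness measure.

    rgb: H x W x 3 array of values in [0, 255].
    Returns a scalar; larger means more colourful.
    """
    r = rgb[..., 0].astype(float)
    g = rgb[..., 1].astype(float)
    b = rgb[..., 2].astype(float)
    # Simple opponent channels (assumed form, not necessarily the fitted metric).
    rg = r - g
    yb = 0.5 * (r + g) - b
    # Combine the spread and the mean of the opponent distributions.
    sigma = np.hypot(rg.std(), yb.std())
    mu = np.hypot(rg.mean(), yb.mean())
    return sigma + 0.3 * mu

# Example: random colours score high, a flat grey image scores zero.
rng = np.random.default_rng(0)
print(colourfulness(rng.uniform(0, 255, (64, 64, 3))))  # colourful noise
print(colourfulness(np.full((64, 64, 3), 128.0)))       # flat grey -> 0.0
```

Because such a measure only needs channel means and standard deviations, it is cheap enough to evaluate per frame on a video stream.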
IEEE Transactions on Pattern Analysis and Machine Intelligence | 2003
David Hasler; Luciano Sbaiz; Sabine Süsstrunk; Martin Vetterli
We address the question of how to characterize the outliers that may appear when matching two views of the same scene. The match is performed by comparing the two views at the pixel level, aiming at a better registration of the images. When using digital photographs as input, we notice that an outlier is often a region that has been occluded, an object that suddenly appears in one of the images, or a region that undergoes an unexpected motion. By assuming that the error in pixel intensity generated by an outlier is similar to the error generated by comparing two random regions in the scene, we can build a model for the outliers based on the content of the two views. We illustrate our model by solving a pose estimation problem: the goal is to compute the camera motion between two views. The matching is expressed as a mixture of inliers and outliers, which defines a function to minimize for improving the pose estimation. Our model has two benefits: first, it delivers a probability for each pixel to belong to the outliers; second, our tests show that the method is substantially more robust than the traditional robust estimators (M-estimators) used in image stitching applications, at only a slightly higher computational cost.
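As a rough illustration of the inlier/outlier mixture, the sketch below assigns each pixel residual a posterior probability of being an outlier under a Gaussian inlier term and a crude uniform outlier term. In the paper the outlier density is derived from the image content itself; the noise parameters below are hypothetical.

```python
import numpy as np

def outlier_posterior(residuals, sigma=5.0, eps=0.1, outlier_density=1.0 / 255.0):
    """Posterior probability that each pixel residual belongs to the outliers.

    residuals: per-pixel intensity differences between the two registered views.
    sigma: assumed inlier noise standard deviation (hypothetical value).
    eps: prior outlier fraction.
    outlier_density: density of the outlier model; the paper builds this from the
    scene content, here we crudely assume a uniform distribution over intensities.
    """
    inlier = (1 - eps) * np.exp(-0.5 * (residuals / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
    outlier = eps * outlier_density
    return outlier / (inlier + outlier)

# Example: small residuals are almost surely inliers, large ones almost surely outliers.
res = np.array([0.5, 2.0, 30.0, 120.0])
print(outlier_posterior(res))
```

In a pose estimation loop, such per-pixel weights can down-weight occluded or moving regions when minimizing the registration error.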
IEEE Transactions on Pattern Analysis and Machine Intelligence | 2012
Karim Ali; François Fleuret; David Hasler; Pascal Fua
We propose a new learning strategy for object detection. The proposed scheme forgoes the need to train a collection of detectors dedicated to homogeneous families of poses, and instead learns a single classifier that has the inherent ability to deform based on the signal of interest. We train a detector with a standard AdaBoost procedure by using combinations of pose-indexed features and pose estimators. This allows the learning process to select and combine various estimates of the pose with features able to compensate for variations in pose without the need to label data for training or explore the pose space in testing. We validate our framework on three types of data: hand video sequences, aerial images of cars, and face images. We compare our method to a standard boosting framework, with access to the same ground truth, and show a reduction in the false alarm rate of up to an order of magnitude. Where possible, we compare our method to the state of the art, which requires pose annotations of the training data, and demonstrate comparable performance.
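The toy example below is only meant to illustrate what a pose-indexed feature is, not the features, pose estimators, or boosting setup of the paper: the sampling location of the feature is expressed relative to a pose estimate computed from the image itself, so one weak classifier keeps responding as the object moves.

```python
import numpy as np

def estimate_shift(img):
    """Hypothetical pose estimator: horizontal position of the brightest column."""
    return int(np.argmax(img.sum(axis=0)))

def pose_indexed_feature(img, offset):
    """Feature read at a location indexed by the estimated pose.

    Instead of a fixed image coordinate, the column is expressed relative to the
    pose estimate, so the same feature 'follows' the object as it moves.
    """
    col = (estimate_shift(img) + offset) % img.shape[1]
    return img[:, col].mean()

def decision_stump(img, offset, threshold):
    """A weak classifier of the kind a boosting round could select."""
    return 1 if pose_indexed_feature(img, offset) > threshold else -1

# Example: a bright vertical bar is detected wherever it sits,
# because the feature is indexed by the estimated pose (the bar's column).
rng = np.random.default_rng(1)
img = rng.uniform(0, 0.2, (32, 32))
img[:, 11] = 1.0                                        # object at column 11
print(decision_stump(img, offset=0, threshold=0.5))     # +1 regardless of the column
```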
Journal of Visual Communication and Image Representation | 2004
David Hasler; Sabine Süsstrunk
Digitally, panoramic pictures can be assembled from several individual, overlapping photographs. While the geometric alignment of these photographs has received a lot of attention from the computer vision community, the mapping of colour, i.e. the correction of colour mismatches, has not been studied extensively. In this article, we analyze the colour rendering of today's digital photographic systems and propose a method to correct for colour differences. The colour correction consists of retrieving linearized, relative scene-referred data from uncalibrated images by estimating the Opto-Electronic Conversion Function (OECF) and correcting for exposure, white-point, and vignetting variations between the individual pictures. Different OECF estimation methods are presented and evaluated in conjunction with motion estimation. The resulting panoramas, shown on examples using slides and digital photographs, have much-improved visual quality compared to stitching using only motion estimation. Additionally, we show that colour correction can also improve the geometric alignment.
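A minimal sketch of the photometric part of such a correction is given below, assuming a fixed gamma-type OECF and per-channel gains estimated from the overlap region; the paper estimates the OECF from the data and also corrects vignetting, neither of which is reproduced here.

```python
import numpy as np

def linearize(img, gamma=2.2):
    """Invert an assumed gamma-type OECF to get relative scene-referred values.

    The paper estimates the OECF from the images; a fixed gamma is only a stand-in.
    """
    return np.clip(img, 0, 1) ** gamma

def match_exposure_whitepoint(ref_overlap, src_overlap):
    """Per-channel gains aligning the source to the reference in linear space."""
    ref_lin = linearize(ref_overlap).reshape(-1, 3).mean(axis=0)
    src_lin = linearize(src_overlap).reshape(-1, 3).mean(axis=0)
    return ref_lin / (src_lin + 1e-8)

def correct(src, gains, gamma=2.2):
    """Apply the gains in linear space and re-encode for display."""
    return np.clip(linearize(src) * gains, 0, 1) ** (1.0 / gamma)

# Example with synthetic overlapping patches: the source is darker and slightly
# blue; after correction its overlap matches the reference almost exactly.
rng = np.random.default_rng(2)
ref = rng.uniform(0.2, 0.8, (16, 16, 3))
src = np.clip(linearize(ref) * np.array([0.7, 0.7, 0.9]), 0, 1) ** (1 / 2.2)
gains = match_exposure_whitepoint(ref, src)
print(np.abs(correct(src, gains) - ref).max())  # close to zero
```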
international conference on computer vision | 2009
Karim Ali; François Fleuret; David Hasler; Pascal Fua
A new learning strategy for object detection is presented. The proposed scheme forgoes the need to train a collection of detectors dedicated to homogeneous families of poses, and instead learns a single classifier that has the inherent ability to deform based on the signal of interest. Specifically, we train a detector with a standard AdaBoost procedure by using combinations of pose-indexed features and pose estimators instead of the usual image features. This allows the learning process to select and combine various estimates of the pose with features able to implicitly compensate for variations in pose. We demonstrate that a detector built in this manner provides noticeable gains on two hand video sequences, and we analyze its performance as these data sets are synthetically enriched in pose variation without being increased in size.
Proceedings of SPIE | 2000
David Hasler; Sabine Süsstrunk
Nowadays, the ability to create panoramic photographs is included with most commercial digital cameras. The principle is to shoot several pictures and stitch them together to build a panorama. To ensure the quality of the final image, the different pictures have to be perfectly aligned and the colors of the images should match. While the alignment of images has received a lot of attention from the computer vision community, the mismatch in colors has often been ignored, or handled with smooth transitions from one picture to the next that merely mask it. This paper presents a method to simultaneously estimate the alignment of the pictures and the color transformation between them. By estimating the color transformation from the scene to the pixels, the method is able to remove the color mismatch between the different images and thus leads to better image quality.
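The toy example below illustrates the coupling between geometric and photometric estimation on a 1-D signal, alternating between a brute-force shift search and a least-squares gain fit; it is a sketch of the joint-estimation idea only, not the camera and color models used in the paper.

```python
import numpy as np

def joint_shift_and_gain(ref, src, max_shift=10, iters=5):
    """Toy alternating estimation of a geometric shift and a photometric gain.

    ref, src: 1-D signals where src is approximately gain * ref shifted by some offset.
    Only illustrates how alignment and color estimation feed each other.
    """
    gain, shift = 1.0, 0
    for _ in range(iters):
        # Geometric step: best shift given the current gain estimate.
        errs = [np.mean((np.roll(ref, s) * gain - src) ** 2)
                for s in range(-max_shift, max_shift + 1)]
        shift = int(np.argmin(errs)) - max_shift
        # Photometric step: least-squares gain given the current shift.
        aligned = np.roll(ref, shift)
        gain = float(np.dot(aligned, src) / np.dot(aligned, aligned))
    return shift, gain

# Example: recover a shift of 4 samples and a gain of 0.8 from noisy data.
rng = np.random.default_rng(3)
ref = rng.uniform(0.1, 1.0, 200)
src = 0.8 * np.roll(ref, 4) + 0.01 * rng.normal(size=200)
print(joint_shift_and_gain(ref, src))  # approximately (4, 0.8)
```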
international conference on image processing | 1999
David Hasler; Luciano Sbaiz; Serge Ayer; Martin Vetterli
This paper addresses a key issue in the problem of reconstructing a panoramic view from several pictures taken with a hand-held camera, namely the estimation of some ill-posed parameters using an external constraint. For many practical reasons, a panoramic reconstruction has to be performed in several independent steps, resulting in a set of different measurements of the same reality. For example, the focal length can be estimated from each pair of overlapping images. The idea is to introduce some a priori knowledge about the world by means of a constraint on the parameter set; in the focal-length example, the constraint would impose equality on all the focal length estimates. This paper describes the appropriate correction that needs to be applied to the parameters in order to obtain a coherent result. It also suggests a way to evaluate whether a constraint is plausible given a set of initial estimates. The basic idea behind the method is to modify the parameters without significantly changing the overlapping parts of the images. The method is evaluated using two different experimental setups. The first aims at improving the quality of a full panoramic image. The second independently measures the positions in space of two planes using two pictures; this experiment shows that the two computed motions can be considered as a single motion observing two different planes in space.
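A scalar toy version of the constraint-enforcement idea is sketched below: independent, noisy estimates of the same quantity (e.g., the focal length) are projected onto the constraint that they are all equal, weighted by their assumed variances, and a chi-square-like score indicates whether the constraint is plausible. The values and the projection rule are illustrative assumptions, not the correction derived in the paper.

```python
import numpy as np

def enforce_equal(estimates, variances):
    """Project independent scalar estimates onto the constraint that they are equal.

    Returns the corrected common value and a chi-square-like score that can be
    compared against a threshold to judge whether the constraint is plausible.
    The paper handles full parameter vectors and corrects them while changing
    the image overlap as little as possible; this is only a toy scalar version.
    """
    estimates = np.asarray(estimates, dtype=float)
    w = 1.0 / np.asarray(variances, dtype=float)        # inverse-variance weights
    common = np.sum(w * estimates) / np.sum(w)           # weighted least-squares value
    score = np.sum(w * (estimates - common) ** 2)        # plausibility of the constraint
    return common, score

# Example: three focal-length estimates from three overlapping image pairs.
f_hat = np.array([1012.0, 1025.0, 1018.0])   # hypothetical estimates (pixels)
var = np.array([100.0, 400.0, 225.0])        # hypothetical estimation variances
print(enforce_equal(f_hat, var))             # common value and plausibility score
```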
human vision and electronic imaging conference | 2003
Anna Liberg; David Hasler
A combined achromatic and chromatic contrast metric for digital images and video is presented in this paper. Our work is aimed at tuning any parametric rendering algorithm in an automated way by computing how much detail an observer perceives in a rendered scene. The contrast metric is based on contrast analysis, in the spatial domain, of image sub-bands constructed by pyramidal decomposition of the image. The proposed metric is the sum of the perceptual contrast of every pixel in the image at different detail levels, corresponding to different viewing distances. The novel metric shows high correlation with subjective experiments. Important applications include optimal parameter selection for image rendering and contrast enhancement techniques, as well as auto-exposure in an image capture device.
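The sketch below gives a toy multi-scale contrast measure in the spirit described: band-pass detail layers are extracted with a simple Gaussian pyramid and their per-pixel contrast is normalised by the local luminance, then averaged per level and summed over levels. The decomposition, weighting, and colour handling of the published metric are not reproduced.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def pyramid_contrast(img, levels=4, eps=1e-3):
    """Toy multi-scale contrast measure for a greyscale image with values in [0, 1].

    Each band is the difference between the current level and its blurred version
    (a band-pass detail layer), and its per-pixel contrast is normalised by the
    local luminance, roughly in the spirit of band-limited contrast.
    """
    total = 0.0
    current = img.astype(float)
    for _ in range(levels):
        blurred = gaussian_filter(current, sigma=2.0)
        band = current - blurred                       # detail at this scale
        local_contrast = np.abs(band) / (blurred + eps)
        total += local_contrast.mean()
        current = blurred[::2, ::2]                    # move one level down the pyramid
    return total

# Example: a textured image shows more multi-scale contrast than a flat one.
rng = np.random.default_rng(4)
print(pyramid_contrast(rng.uniform(0, 1, (128, 128))))  # large value
print(pyramid_contrast(np.full((128, 128), 0.5)))        # 0.0
```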
Proceedings of SPIE, the International Society for Optical Engineering | 2001
David Hasler; Sabine Süsstrunk
Proc. International Conference on Imaging Systems (ICIS) | 2002
David Hasler; Sabine Süsstrunk