Jean Philippe Tarel
University of Paris
Publications
Featured research published by Jean Philippe Tarel.
International Conference on Computer Vision | 2009
Jean Philippe Tarel; Nicolas Hautiere
One source of difficulty when processing outdoor images is the presence of haze, fog, or smoke, which fades the colors and reduces the contrast of the observed objects. We introduce a novel algorithm and variants for visibility restoration from a single image. The main advantage of the proposed algorithm over others is its speed: its complexity is a linear function of the number of image pixels only. This speed allows visibility restoration to be applied for the first time within real-time processing applications such as sign, lane-marking, and obstacle detection from an in-vehicle camera. Another advantage is the ability to handle both color and gray-level images, since the ambiguity between the presence of fog and objects with low color saturation is resolved by assuming that only small objects can have colors with low saturation. The algorithm is controlled by only a few parameters and consists of three steps: atmospheric veil inference, image restoration and smoothing, and tone mapping. A comparative study and quantitative evaluation against several state-of-the-art algorithms demonstrates that similar or better quality results are obtained. Finally, an application to lane-marking extraction in gray-level images illustrates the practical interest of the approach.
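The underlying model is Koschmieder's law: the observed intensity is I(x) = R(x) * (1 - V(x)/A) + V(x), where R is the scene radiance, A the sky intensity, and V the atmospheric veil. Once a smooth veil V is inferred, the equation is inverted pixel by pixel. Below is a minimal sketch of that idea on a gray-level image with intensities in [0, 1]; the plain median-filter veil inference, the function names, and all parameter values are illustrative assumptions, not the authors' exact implementation.

```python
# Minimal sketch of single-image visibility restoration in the spirit of the
# paper: infer a smooth atmospheric veil, then invert Koschmieder's law.
import numpy as np
from scipy.ndimage import median_filter

def estimate_veil(whiteness, size=41, p=0.95):
    """Infer the atmospheric veil V(x) from the image 'whiteness' W(x)
    (the gray level itself, or the per-pixel minimum over color channels).
    The veil must stay between 0 and W and be smooth almost everywhere."""
    w = whiteness.astype(np.float64)
    a = median_filter(w, size)                  # local average of the whiteness
    b = a - median_filter(np.abs(w - a), size)  # discard local texture and objects
    return np.clip(p * b, 0.0, w)               # enforce 0 <= V <= W

def restore(image, veil, sky=1.0):
    """Invert I = R * (1 - V/sky) + V to recover the scene radiance R."""
    return np.clip((image - veil) / np.maximum(1.0 - veil / sky, 1e-3), 0.0, 1.0)

# Usage on a synthetic gray-level image with intensities in [0, 1].
foggy = np.clip(np.linspace(0.3, 0.9, 256)[None, :] + 0.02, 0.0, 1.0).repeat(128, axis=0)
clear = restore(foggy, estimate_veil(foggy))
```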
IEEE Intelligent Vehicles Symposium | 2010
Jean Philippe Tarel; Nicolas Hautiere; Aurélien Cord; Dominique Gruyer; Houssam Halmaoui
One source of accidents when driving a vehicle is the presence of homogeneous or heterogeneous fog. Fog fades the colors and reduces the contrast of the observed objects with respect to their distances. Various camera-based Advanced Driver Assistance Systems (ADAS) can be improved if efficient algorithms are designed for visibility enhancement of road images. The visibility enhancement algorithm proposed in [1] is not dedicated to road images and thus leads to results of limited quality on images of this kind. In this paper, we interpret the algorithm in [1] as the inference of the local atmospheric veil subject to two constraints. From this interpretation, we propose an extended algorithm that better handles road images by taking into account that a large part of the image can be assumed to be a planar road. The advantages of the proposed local algorithm are its speed, its ability to handle both color and gray-level images, and its small number of parameters. A comparative study and quantitative evaluation against other state-of-the-art algorithms is carried out on synthetic images with several types of generated fog. This evaluation demonstrates that the new algorithm produces results of similar quality with homogeneous fog and that it better deals with the presence of heterogeneous fog.
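A sketch of the road-plane ingredient: under homogeneous fog, a flat road places the point imaged at row v (below the horizon row v_h) at depth d(v) = lam / (v - v_h), so the veil expected on the road depends only on the image row. This row-wise profile is one way to constrain the locally inferred veil; lam, v_h, and the extinction coefficient k are assumed known here, and the exact constraint used in the paper may differ.

```python
import numpy as np

def road_plane_veil(height, v_h, k, lam, sky=1.0):
    """Veil expected along a flat road under homogeneous fog.
    A road point imaged at row v (v > v_h) lies at depth d(v) = lam / (v - v_h),
    so Koschmieder's law gives V(v) = sky * (1 - exp(-k * d(v)))."""
    v = np.arange(height, dtype=np.float64)
    depth = np.where(v > v_h, lam / np.maximum(v - v_h, 1e-6), np.inf)
    return sky * (1.0 - np.exp(-k * depth))   # full veil at and above the horizon

# Example: per-row veil profile for a 480-row image (all parameter values are assumptions).
profile = road_plane_veil(height=480, v_h=200.0, k=0.05, lam=1000.0)
```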
International Conference on Image Processing | 2005
Sabri Boughorbel; Jean Philippe Tarel; Nozha Boujemaa
The histogram intersection (HI) kernel has recently been introduced for image recognition tasks. The HI kernel is proved to be positive definite and can thus be used in support vector machine (SVM) based recognition. Experimentally, it also leads to good recognition performance. However, its derivation applies only to binary strings such as color histograms computed on equally sized images. In this paper, we propose a new kernel, which we name the generalized histogram intersection (GHI) kernel, since it applies in a much larger variety of contexts. First, an original derivation of the positive definiteness of the GHI kernel is proposed in the general case. As a consequence, vectors of real values can be used, and the images no longer need to have the same size. Second, a hyper-parameter is added compared with the HI kernel, which allows us to better tune the kernel model to particular databases. We present experiments showing that the GHI kernel outperforms the simple HI kernel on a simple recognition task. Comparisons with other well-known kernels are also provided.
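The GHI kernel itself is simple: K(x, y) = sum_i min(|x_i|^beta, |y_i|^beta), where beta is the added hyper-parameter (beta = 1 on non-negative histograms recovers the original HI kernel). A short sketch with a precomputed-kernel SVM follows; the toy data and the value of beta are arbitrary.

```python
import numpy as np
from sklearn.svm import SVC

def ghi_kernel(X, Y, beta=1.0):
    """Generalized Histogram Intersection kernel:
    K(x, y) = sum_i min(|x_i|**beta, |y_i|**beta)."""
    Xb = np.abs(np.asarray(X, dtype=np.float64)) ** beta
    Yb = np.abs(np.asarray(Y, dtype=np.float64)) ** beta
    return np.minimum(Xb[:, None, :], Yb[None, :, :]).sum(axis=2)

# Toy usage with a precomputed Gram matrix (data and beta are arbitrary).
rng = np.random.default_rng(0)
X_train, y_train = rng.random((40, 16)), rng.integers(0, 2, 40)
X_test = rng.random((10, 16))
svm = SVC(kernel="precomputed").fit(ghi_kernel(X_train, X_train, beta=0.5), y_train)
pred = svm.predict(ghi_kernel(X_test, X_train, beta=0.5))
```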
IEEE Transactions on Intelligent Transportation Systems | 2010
Nicolas Hautiere; Jean Philippe Tarel; Didier Aubert
In adverse weather conditions, in particular in daylight fog, the contrast of images grabbed by in-vehicle cameras in the visible light range is drastically degraded, which makes current camera-based driver assistance systems very sensitive to weather conditions. An onboard vision system should take weather effects into account. The effects of daylight fog vary across the scene and are exponential with respect to the depth of the scene points. Because, contrary to fixed-camera surveillance, the road scene structure cannot be computed beforehand in this context, a new scheme is proposed. The fog density is first estimated and then used to restore the contrast, using a flat-world assumption on the segmented free space in front of the moving vehicle. A scene structure is then estimated and used to refine the restoration process. Results are presented on sample road scenes under foggy weather and assessed by computing the visibility level enhancement gained by the method. Finally, we show applications to the enhancement, in daylight fog, of low-level algorithms used in advanced camera-based driver assistance.
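One classical way to obtain the fog density from an onboard camera, used in the authors' related fog-detection work, relies on the flat-world model: along the road, luminance as a function of image row v follows L(v) = A + (R - A) * exp(-k * lam / (v - v_h)), and its inflection point v_i satisfies k * lam = 2 * (v_i - v_h). The sketch below turns a measured inflection row into an extinction estimate; v_i, v_h, and lam (a camera-calibration constant) are assumed to be provided by the rest of the pipeline.

```python
def extinction_from_inflection(v_i, v_h, lam):
    """Extinction coefficient k from the inflection point of the vertical
    luminance profile of the road, under homogeneous fog and a flat road:
    L(v) = A + (R - A) * exp(-k * lam / (v - v_h))  =>  k = 2 * (v_i - v_h) / lam.
    Rows increase downward, so road points satisfy v_i > v_h."""
    if v_i <= v_h:
        raise ValueError("inflection row must lie below the horizon row")
    return 2.0 * (v_i - v_h) / lam

# Example with assumed calibration (lam) and measured rows (v_i, v_h).
k = extinction_from_inflection(v_i=260.0, v_h=200.0, lam=1000.0)
print(f"extinction k = {k:.3f} m^-1")   # 0.120 here, i.e. roughly 25 m visibility
```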
International Conference on Multimedia and Expo | 2005
Sabri Boughorbel; Jean Philippe Tarel; Nozha Boujemaa
Kernel-based methods such as the support vector machine (SVM) have provided successful tools for solving many recognition problems. One of the reasons for this success is the use of kernels. For most of these methods, the kernel has to be checked for positive definiteness; for SVM, for instance, a positive definite kernel ensures that the optimized problem is convex and thus that the obtained solution is unique. An alternative class of kernels, called conditionally positive definite kernels, has long been studied from the theoretical point of view but has drawn attention from the community only in the last decade. We propose a new kernel, named the log kernel, which seems particularly interesting for images. Moreover, we prove that this new kernel is conditionally positive definite, as is the power kernel. Finally, we show experimentally that using conditionally positive definite kernels allows us to outperform classical positive definite kernels.
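The log kernel has the form K(x, y) = -log(1 + ||x - y||^beta), which is conditionally positive definite for 0 < beta <= 2, like the power kernel -||x - y||^beta. Since SVM training only requires conditional positive definiteness, the Gram matrix can be fed to a precomputed-kernel SVM directly, as in the sketch below (toy data, arbitrary beta).

```python
import numpy as np
from sklearn.metrics.pairwise import euclidean_distances
from sklearn.svm import SVC

def log_kernel(X, Y, beta=1.0):
    """Log kernel K(x, y) = -log(1 + ||x - y||**beta),
    conditionally positive definite for 0 < beta <= 2."""
    return -np.log1p(euclidean_distances(X, Y) ** beta)

# Toy usage: conditional positive definiteness is enough for SVM training,
# so the Gram matrix goes straight into a precomputed-kernel SVC.
rng = np.random.default_rng(1)
X, y = rng.random((60, 8)), rng.integers(0, 2, 60)
clf = SVC(kernel="precomputed").fit(log_kernel(X, X, beta=1.0), y)
```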
IEEE Intelligent Vehicles Symposium | 2009
Ludovic Simon; Jean Philippe Tarel; Roland Brémond
This paper proposes an improvement of Advanced Driver Assistance Systems based on saliency estimation of road signs. After a road-sign detection stage, the sign's saliency is estimated using SVM learning. A model of visual saliency linking the size of an object to a size-independent saliency is proposed. An eye-tracking experiment in a context close to driving shows that this computational estimate of saliency fits well with human perception and demonstrates the applicability of the proposed estimator to improved ADAS.
Machine Vision and Applications | 2014
Nicolas Hautiere; Jean Philippe Tarel; Houssam Halmaoui; Roland Brémond; Didier Aubert
Free-space detection is a primary task for car navigation. Unfortunately, classical approaches have difficulties in adverse weather conditions, in particular in daytime fog. In this paper, a solution is proposed based on contrast restoration of images grabbed by an in-vehicle camera. The proposed method improves the state of the art in several ways. First, the fog region of interest is better segmented thanks to the computation of shortest-route maps. Second, the fog density and the position of the horizon line are estimated jointly. Then, the method restores the contrast of the road by assuming only that the road is flat and, at the same time, detects the vertical objects. Finally, a segmentation of the connected component in front of the vehicle gives the free-space area. An experimental validation was carried out to assess the effectiveness of the method. Results are shown on sample images extracted from video sequences acquired with an in-vehicle camera. The proposed method is complementary to existing free-space detection methods relying on color segmentation and stereovision.
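A rough sketch of the flat-road restoration step, together with one simple way to flag pixels that cannot lie on the road plane; the inequality test is an illustrative proxy, not necessarily the paper's object-detection criterion. The idea: if inverting Koschmieder's law with the road-plane depth d(v) = lam / (v - v_h) yields a negative radiance, the pixel is darker than the veil a road point at that row would receive, so it must belong to something closer, typically a vertical object.

```python
import numpy as np

def flat_road_restore(image, v_h, k, lam, sky=1.0):
    """Restore contrast with the flat-road depth model and flag pixels that
    cannot belong to the road plane. The fog density k, the horizon row v_h,
    and the calibration constant lam are assumed to be estimated upstream."""
    rows = np.arange(image.shape[0], dtype=np.float64)
    depth = np.where(rows > v_h, lam / np.maximum(rows - v_h, 1e-6), np.inf)
    t = np.exp(-k * depth)[:, None]               # per-row transmission
    restored = (image - sky * (1.0 - t)) / np.maximum(t, 1e-3)
    not_road = restored < 0.0                     # darker than the road-plane veil
    return np.clip(restored, 0.0, 1.0), not_road
```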
IEEE Transactions on Instrumentation and Measurement | 2008
Nicolas Hautiere; Didier Aubert; Eric Dumont; Jean Philippe Tarel
In the framework of the French Research Action For Secure Driving (ARCOS) project, we developed a system that uses in-vehicle charge-coupled device (CCD) cameras to estimate the visibility distance in adverse weather conditions, particularly in fog. The topic of this paper is the validation of this system. First, we present Koschmieder's model of the apparent luminance of objects observed against the background sky on the horizon, define the different visibility distances used in our measurement framework, and describe the links that bind them. Then, we describe the two specific onboard techniques we designed to estimate the visibility distance. In the third section, we present a dedicated site and how we use it to validate these techniques. Finally, we give the results of a quantitative validation of our onboard techniques, using actual pictures of the validation site in foggy weather.
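Under Koschmieder's model, the apparent luminance of an object at distance d against the horizon sky is L = L0 * exp(-k*d) + Lf * (1 - exp(-k*d)), so its intrinsic contrast decays as exp(-k*d); the meteorological visibility distance is the range at which this contrast falls to the 5% threshold, V_met = -ln(0.05)/k, roughly 3/k. A one-line helper, assuming the extinction coefficient k is available from the onboard estimation:

```python
import math

def meteorological_visibility(k, contrast_threshold=0.05):
    """Distance at which the apparent contrast, which decays as exp(-k*d)
    under Koschmieder's model, falls to the given threshold (CIE uses 5%)."""
    return -math.log(contrast_threshold) / k     # approximately 3 / k

print(meteorological_visibility(0.04))  # ~75 m for k = 0.04 m^-1 (assumed value)
```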
European Conference on Computer Vision | 2010
Roland Brémond; Josselin Petit; Jean Philippe Tarel
A number of computational models of visual attention have been proposed based on the concept of a saliency map. Some of them have been validated as predictors of the visual scan-paths of observers looking at images and videos, using oculometric data. They are widely used in computer graphics, mainly for image rendering in order to avoid spending too much computing time on non-salient areas, and in video coding in order to keep better image quality in salient areas. However, these algorithms had not been used so far with High Dynamic Range (HDR) inputs. In this paper, we show that in the case of HDR images, the predictions of algorithms based on Itti, Koch and Niebur [1] are less accurate than with 8-bit images. To improve the saliency computation for HDR inputs, we propose a new algorithm derived from Itti and Koch [3]. From an eye-tracking experiment with an HDR scene, we show that this algorithm leads to good results for the saliency map computation, with a better fit between the saliency map and the ocular fixation map than with Itti, Koch and Niebur's algorithm. These results may impact image retargeting issues, for the display of HDR images on both LDR and HDR display devices.
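As an illustration of why dynamic range matters here (this is not the authors' algorithm): Itti-style saliency relies on center-surround differences of a luminance pyramid, and feeding it linear HDR luminance lets a few very bright pixels dominate the maps. One plausible adaptation, sketched below under that assumption, is to run the center-surround step on a compressed (logarithmic) luminance instead.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def center_surround_saliency(luminance, scales=((1, 4), (2, 8), (3, 12)), log_input=True):
    """Crude intensity-channel saliency: absolute differences between a fine
    ('center') and a coarse ('surround') Gaussian blur, summed over scales.
    With log_input=True the HDR luminance is log-compressed first so that a few
    very bright pixels do not dominate the map. Illustrative sketch only."""
    x = np.log1p(luminance) if log_input else luminance.astype(np.float64)
    sal = np.zeros_like(x, dtype=np.float64)
    for center, surround in scales:
        sal += np.abs(gaussian_filter(x, center) - gaussian_filter(x, surround))
    return sal / sal.max() if sal.max() > 0 else sal

# Toy HDR-like input: a dark scene with one very bright patch (values in cd/m^2).
hdr = np.full((120, 160), 5.0)
hdr[40:60, 70:90] = 5000.0
saliency = center_surround_saliency(hdr)
```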
Archive | 2003
Jean Lavenant; Jean Philippe Tarel; Didier Aubert
Collaboration
Institut national de recherche sur les transports et leur sécurité