Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Neus Sabater is active.

Publication


Featured research published by Neus Sabater.


Computer Vision and Pattern Recognition | 2016

Split and Match: Example-Based Adaptive Patch Sampling for Unsupervised Style Transfer

Oriel Frigo; Neus Sabater; Julie Delon; Pierre Hellier

This paper presents a novel unsupervised method to transfer the style of an example image to a source image. The complex notion of image style is considered here as a local texture transfer, possibly coupled with a global color transfer. For the local texture transfer, we propose a new method based on an adaptive patch partition that captures the style of the example image and preserves the structure of the source image. More precisely, this example-based partition predicts how well a source patch matches an example patch. Results on various images show that our method outperforms the most recent techniques.
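
As a rough illustration of the adaptive patch partition, here is a minimal sketch (our own toy version, not the authors' code): a source patch is kept if some example patch matches it well enough, and is split quadtree-style otherwise. The brute-force SSD matcher, the threshold tau, and all names are illustrative assumptions.

```python
# Toy quadtree partition: split a source patch whenever no example
# patch matches it closely enough, so detailed regions get finer cells.
import numpy as np

def best_match_error(patch, example, stride=8):
    """Smallest per-pixel SSD between `patch` and same-size example
    patches, scanned brute-force with the given stride."""
    ph, pw = patch.shape
    best = np.inf
    for y in range(0, example.shape[0] - ph + 1, stride):
        for x in range(0, example.shape[1] - pw + 1, stride):
            err = np.sum((example[y:y+ph, x:x+pw] - patch) ** 2)
            best = min(best, err)
    return best / patch.size

def adaptive_partition(source, example, y, x, size, tau, min_size=8):
    """Return quadtree cells (y, x, size) covering the source patch.
    `size` is assumed to be a power of two."""
    patch = source[y:y+size, x:x+size]
    if size <= min_size or best_match_error(patch, example) < tau:
        return [(y, x, size)]
    half = size // 2
    cells = []
    for dy in (0, half):
        for dx in (0, half):
            cells += adaptive_partition(source, example,
                                        y + dy, x + dx, half, tau, min_size)
    return cells
```

Flat regions are then covered by a few large patches while structured regions are refined, which is what lets the transfer preserve the source structure.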


European Conference on Computer Vision | 2014

Accurate Disparity Estimation for Plenoptic Images

Neus Sabater; Mozhdeh Seifi; Valter Drazic; Gustavo Sandri; Patrick Pérez

In this paper we propose a post-processing pipeline to accurately recover the views (light field) from the raw data of a plenoptic camera such as the Lytro, and to estimate disparity maps from such a light field in a novel way. First, the microlens centers are estimated, and the raw image is demultiplexed without being demosaicked beforehand. Then, we present a new block-matching algorithm to estimate disparities for the mosaicked plenoptic views. Our algorithm fully exploits the configuration given by the plenoptic camera: (i) the views are horizontally and vertically rectified and have the same baseline, and therefore (ii) at each point, the vertical and horizontal disparities are the same. Our strategy of demultiplexing without demosaicking avoids image artifacts due to view cross-talk and helps estimate more accurate disparity maps. Finally, we compare our results with state-of-the-art methods.
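
To make property (ii) concrete, here is a hedged sketch of such a block matcher: the same scalar disparity d shifts the right-hand neighbour view horizontally and the bottom neighbour view vertically, so both costs accumulate over a single one-dimensional search. The SAD cost and all names are our illustrative assumptions.

```python
# Toy block matching under the plenoptic constraint: one scalar
# disparity per pixel, applied both horizontally and vertically.
import numpy as np

def plenoptic_block_match(center, right, below, d_max, block=7):
    """center/right/below: rectified views as 2-D float arrays of the
    same shape. Returns an integer disparity map for `center`."""
    h, w = center.shape
    r = block // 2
    disparity = np.zeros((h, w), dtype=np.int32)
    for y in range(r, h - r - d_max):
        for x in range(r, w - r - d_max):
            ref = center[y-r:y+r+1, x-r:x+r+1]
            costs = []
            for d in range(d_max + 1):
                # The same d shifts the right view horizontally ...
                c_h = np.abs(right[y-r:y+r+1, x+d-r:x+d+r+1] - ref).sum()
                # ... and the bottom view vertically.
                c_v = np.abs(below[y+d-r:y+d+r+1, x-r:x+r+1] - ref).sum()
                costs.append(c_h + c_v)
            disparity[y, x] = int(np.argmin(costs))
    return disparity
```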


Asian Conference on Computer Vision | 2014

Optimal Transportation for Example-Guided Color Transfer

Oriel Frigo; Neus Sabater; Vincent Demoulin; Pierre Hellier

In this work, a novel and generic method for example-based color transfer is presented. The color transfer is formulated in two steps: first, an example-based Chromatic Adaptation Transform (CAT) is designed to match the illuminants of the input and example images. Second, the dominant colors of the input and example images are optimally mapped. The main strength of the method comes from using optimal transportation to map a pair of meaningful color palettes, and from regularizing this mapping through thin plate splines. In addition, we show that additional visual or semantic constraints can be seamlessly incorporated to obtain a consistent color mapping. Experiments show that the proposed method outperforms state-of-the-art techniques on challenging images. In particular, color mapping artifacts have been objectively assessed with the Structural Similarity (SSIM) measure [26], showing that the proposed approach preserves structures while transferring color. Finally, results on video color transfer show the effectiveness of the method.
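
The palette-mapping step can be pictured with a small sketch, assuming uniform palette weights, in which case optimal transport reduces to an optimal assignment; the k-means palettes and all names are our illustrative choices, and the thin-plate-spline regularization is omitted.

```python
# Toy palette mapping: k dominant colors per image, mapped one-to-one
# by minimising total squared color displacement (optimal assignment).
import numpy as np
from scipy.cluster.vq import kmeans2
from scipy.optimize import linear_sum_assignment

def palette_mapping(input_rgb, example_rgb, k=8):
    """input_rgb / example_rgb: (N, 3) float arrays of pixel colors.
    Returns the input palette, its image under the mapping, and the
    per-pixel cluster labels of the input."""
    src_pal, src_lbl = kmeans2(input_rgb, k, minit='++', seed=0)
    dst_pal, _ = kmeans2(example_rgb, k, minit='++', seed=0)
    cost = ((src_pal[:, None, :] - dst_pal[None, :, :]) ** 2).sum(-1)
    rows, cols = linear_sum_assignment(cost)  # optimal 1-to-1 transport
    return src_pal[rows], dst_pal[cols], src_lbl

# A naive recoloring would then be: new_pixels = mapped[labels], i.e.
# each pixel takes the example color assigned to its input cluster.
```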


Image and Vision Computing New Zealand | 2012

A precise real-time stereo algorithm

Valter Drazic; Neus Sabater

Speed and accuracy in stereo vision are of foremost importance in many applications. Very few authors focus on both aspects, and in this paper we propose a new local algorithm that achieves high accuracy in real time. Our GPU implementation of a disparity estimator matches the fastest published so far, at 4839 MDE/s (millions of disparity estimations per second), but is the most precise at that speed. Moreover, our algorithm achieves high accuracy thanks to a reliability criterion in the matching decision rule. A quantitative and qualitative evaluation has been carried out on the Middlebury benchmark and on real data, showing the superiority of our algorithm in terms of execution speed and matching accuracy. Achieving such performance in terms of speed and quality opens the way for new applications in 3D media and entertainment services.
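
The reliability criterion can be illustrated with a simple stand-in (our own, not necessarily the paper's exact rule): a disparity is accepted only when the best matching cost beats the second-best by a clear margin; otherwise the pixel is rejected and left for post-processing.

```python
# Toy reliability test over a matching-cost volume (non-negative costs).
import numpy as np

INVALID = -1

def reliable_disparity(costs, margin=0.15):
    """costs: (H, W, D) cost volume. Returns a disparity map where
    pixels failing the distinctiveness test are set to INVALID."""
    order = np.argsort(costs, axis=2)
    best = np.take_along_axis(costs, order[:, :, :1], axis=2)[:, :, 0]
    second = np.take_along_axis(costs, order[:, :, 1:2], axis=2)[:, :, 0]
    disp = order[:, :, 0].astype(np.int32)
    # Reject matches whose best cost is not at least `margin` below
    # the runner-up: ambiguous pixels hurt accuracy more than holes.
    disp[best > (1.0 - margin) * second] = INVALID
    return disp
```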


International Conference on Image Processing | 2015

Motion driven tonal stabilization

Oriel Frigo; Neus Sabater; Julie Delon; Pierre Hellier

This paper addresses the problem of tonal fluctuation in videos. Due to the automatic settings of consumer cameras, the colors of objects in image sequences may change over time. We propose a fast and computationally light method to stabilize this tonal appearance while remaining robust to motion and occlusions. To do so, a minimal color correction model is used in conjunction with an effective estimation of the dominant motion. The final solution is a temporally weighted correction, explicitly driven by the motion magnitude; it is both visually effective and very fast, with potential for real-time processing. Experimental results obtained on a variety of sequences outperform the current state of the art in terms of tonal stability, at a much reduced computational complexity.
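
One way to picture a motion-driven temporal weighting (our reading of the idea, with a per-channel gain model and all names as assumptions): the correction estimated on the current frame is trusted less when the dominant motion is large, since new scene content then biases the estimate.

```python
# Toy tonal stabilisation: per-channel gain toward the previous
# corrected frame, damped when the dominant motion is large.
import numpy as np

def tonal_stabilize(frames, motions, tau=10.0):
    """frames: list of (H, W, 3) float arrays in [0, 1]; motions:
    per-frame dominant-motion magnitudes in pixels."""
    out = [frames[0]]
    for frame, mag in zip(frames[1:], motions[1:]):
        prev = out[-1]
        # Gain matching the mean color of the previous corrected frame.
        gain = prev.reshape(-1, 3).mean(0) / (frame.reshape(-1, 3).mean(0) + 1e-8)
        # Large motion -> new content -> decay the correction toward 1.
        lam = np.exp(-mag / tau)
        gain = lam * gain + (1.0 - lam)
        out.append(np.clip(frame * gain, 0.0, 1.0))
    return out
```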


Proceedings of SPIE | 2014

Fusion of Kinect depth data with trifocal disparity estimation for near real-time high quality depth maps generation

Guillaume Boisson; Paul Kerbiriou; Valter Drazic; Olivier Bureller; Neus Sabater; Arno Schubert

Generating depth maps along with video streams is valuable for cinema and television production. Thanks to improvements in depth acquisition systems, the challenge of fusing depth sensing with disparity estimation is widely investigated in computer vision. This paper presents a new framework for generating depth maps from a rig made of a professional camera, two satellite cameras, and a Kinect device. A new disparity-based calibration method is proposed so that registered Kinect depth samples become perfectly consistent with the disparities estimated between rectified views. In addition, a new hierarchical fusion approach is proposed for combining on-the-fly depth sensing and disparity estimation, in order to circumvent their respective weaknesses. Depth is determined by minimizing a global energy criterion that takes into account the matching reliability and the consistency with the Kinect input. The generated depth maps are thus relevant both in uniform and in textured areas, without holes due to occlusions or structured-light shadows. Our GPU implementation reaches 20 fps when generating quarter-pel-accurate HD720p depth maps along with the main view, which is close to real-time performance for video applications. The estimated depth is of high quality and suitable for 3D reconstruction or virtual view synthesis.
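
The fusion term can be sketched as follows, under simplifying assumptions (a per-pixel winner-takes-all decision instead of the paper's global energy minimisation, and invented names): the stereo matching cost is combined with a Kinect-consistency cost weighted by a per-pixel confidence that is zero in sensor holes and shadow regions.

```python
# Toy depth fusion: combined cost = matching cost + weighted
# disagreement with the registered Kinect disparity.
import numpy as np

def fuse_depth(match_cost, kinect_disp, kinect_conf, lam=0.5):
    """match_cost: (H, W, D) stereo cost volume; kinect_disp: (H, W)
    Kinect disparities registered to the reference view; kinect_conf:
    (H, W) weights in [0, 1], zero where the Kinect has no data."""
    H, W, D = match_cost.shape
    d = np.arange(D, dtype=np.float32)
    consistency = np.abs(d[None, None, :] - kinect_disp[:, :, None])
    total = match_cost + lam * kinect_conf[:, :, None] * consistency
    return np.argmin(total, axis=2)  # per-pixel winner-takes-all
```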


IEEE Transactions on Computational Imaging | 2017

An Image Rendering Pipeline for Focused Plenoptic Cameras

Matthieu Hog; Neus Sabater; Benoit Vandame; Valter Drazic

In this paper, we present a complete processing pipeline for focused plenoptic cameras. In particular, we propose 1) a new algorithm for microlens center calibration performed entirely in the Fourier domain, 2) a novel algorithm for depth map computation using a stereo focal stack, and 3) a depth-based rendering algorithm that is able to refocus at a particular depth or to create all-in-focus images. The proposed algorithms are fast and accurate and do not need to generate sub-aperture images or epipolar plane images, which is crucial for focused plenoptic cameras. Moreover, the resolution of the resulting depth map is the same as that of the rendered image. We show results of our pipeline on Georgiev's dataset and on real images captured with different Raytrix cameras.
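
The Fourier-domain calibration can be pictured with a toy one-dimensional version (a sketch under our own assumptions, far simpler than the paper's algorithm): the microlens pattern in a raw white image is periodic, so the amplitude spectrum peaks at the pattern's fundamental frequency, from which the lens pitch follows.

```python
# Toy horizontal microlens-pitch estimate from a raw white image.
import numpy as np

def estimate_microlens_pitch(white_image):
    """white_image: 2-D float array. Returns the estimated horizontal
    microlens pitch in pixels."""
    profile = white_image.mean(axis=0)        # average out the rows
    spectrum = np.abs(np.fft.rfft(profile))
    spectrum[0] = 0.0                         # drop the DC component
    f = int(np.argmax(spectrum))              # fundamental frequency bin
    return profile.size / float(f)            # period = N / f pixels
```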


European Conference on Computer Vision | 2016

Light Field Segmentation Using a Ray-Based Graph Structure

Matthieu Hog; Neus Sabater; Christine Guillemot

In this paper, we introduce a novel graph representation for interactive light field segmentation using Markov Random Fields (MRFs). The greatest barrier to the adoption of MRFs for light field processing is the large volume of input data. The proposed graph structure exploits the redundancy in the ray space in order to reduce the graph size, decreasing the running time of MRF-based optimisation tasks. Concepts of free rays and ray bundles, with corresponding neighbourhood relationships, are defined to construct the simplified graph-based light field representation. We then propose an interactive light field segmentation algorithm using graph cuts on this ray-space graph structure, which guarantees segmentation consistency across all views. Our experiments on several datasets show results that are very close to the ground truth, competing with state-of-the-art light field segmentation methods in terms of accuracy and with significantly lower complexity. They also show that our method performs well on both densely and sparsely sampled light fields.
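
The ray-bundle idea can be sketched as follows (our illustrative reading, with invented names): rays from all views that reproject, via their disparity, onto the same central-view pixel are grouped into one bundle, so the graph gets one node per bundle instead of one node per ray.

```python
# Toy ray bundling for a light field with per-view disparity maps.
import numpy as np

def bundle_rays(disparity, view_offsets, shape):
    """disparity: dict mapping a view offset (u, v) to that view's
    (H, W) disparity map; view_offsets: iterable of (u, v) angular
    coordinates relative to the central view; shape: (H, W)."""
    H, W = shape
    bundles = {}
    for (u, v) in view_offsets:
        d = disparity[(u, v)]
        ys, xs = np.mgrid[0:H, 0:W]
        x0 = np.rint(xs - d * u).astype(int)   # reproject into the
        y0 = np.rint(ys - d * v).astype(int)   # central view
        ok = (x0 >= 0) & (x0 < W) & (y0 >= 0) & (y0 < H)
        for y, x, yy, xx in zip(ys[ok], xs[ok], y0[ok], x0[ok]):
            bundles.setdefault((yy, xx), []).append((u, v, y, x))
    return bundles  # one graph node per bundle instead of per ray
```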


International Conference on Image Processing | 2014

Disparity-guided demosaicking of light field images

Mozhdeh Seifi; Neus Sabater; Valter Drazic; Patrick Pérez

Light-field imaging has recently been introduced to the mass market by the hand-held plenoptic camera Lytro. Thanks to a microlens array placed between the main lens and the sensor, the captured data contains different views of the scene from different viewpoints. This enables several post-capture applications, e.g., computationally changing the main-lens focus. The raw data conversion in such cameras is, however, barely studied in the literature. The goal of this paper is to study the particularly overlooked problem of demosaicking the views of plenoptic cameras such as Lytro. We exploit the redundant sampling of scene content across the views, and show that disparities estimated from the mosaicked data can guide the demosaicking, resulting in minimal artifacts compared to state-of-the-art methods. Moreover, by properly addressing the view demultiplexing step, we take a first step towards light field super-resolution with negligible computational overhead.
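
The disparity-guided step can be illustrated with a hedged two-view sketch (the names, masks, and purely horizontal disparity are our assumptions): a pixel whose color channel was not sampled in one view borrows that channel from the disparity-shifted position in another view, whenever that view did sample it there.

```python
# Toy cross-view channel borrowing for mosaicked plenoptic views.
import numpy as np

def borrow_channel(raw_a, mask_a, raw_b, mask_b, disp_a):
    """raw_*: (H, W) float mosaicked intensities; mask_*: True where
    the channel of interest was sampled; disp_a: (H, W) integer
    horizontal disparity from view A to view B."""
    H, W = raw_a.shape
    out = np.where(mask_a, raw_a, np.nan)     # NaN marks missing samples
    ys, xs = np.nonzero(~mask_a)              # holes in view A
    xb = xs + disp_a[ys, xs]                  # matching column in view B
    ok = (xb >= 0) & (xb < W)
    ok[ok] &= mask_b[ys[ok], xb[ok]]          # B must have sampled there
    out[ys[ok], xs[ok]] = raw_b[ys[ok], xb[ok]]
    return out                                # leftover NaNs need a fallback
```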


The Visual Computer | 2018

Video Style Transfer by Consistent Adaptive Patch Sampling

Oriel Frigo; Neus Sabater; Julie Delon; Pierre Hellier

This paper addresses the example-based stylization of videos. Style transfer aims at editing an image so that it matches the style of an example. This topic has recently been investigated by several researchers, both in industry and in academia. The difficulty lies in how to capture the style of an image and correctly transfer it to a video. In this paper, we build on our previous work "Split and Match" for still pictures, based on adaptive patch synthesis. We address the issue of extending that technique to video, ensuring that the solution is spatially and temporally consistent. Results show that our video style transfer is visually plausible, while being very competitive in computation time and memory compared to neural network approaches.
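
A generic warp-and-blend scheme gives the flavour of the temporal consistency constraint (a common stand-in, not the paper's patch-level solution; names are assumptions): the previous stylized output is warped into the current frame by optical flow and blended with the independently stylized frame, except where the flow is unreliable.

```python
# Toy temporally consistent blending of per-frame stylizations.
import numpy as np

def temporal_blend(stylized, warped_prev, occluded, beta=0.7):
    """stylized: (H, W, 3) current frame stylized on its own;
    warped_prev: previous output warped by optical flow; occluded:
    (H, W) bool, True where the warp is invalid."""
    blend = beta * warped_prev + (1.0 - beta) * stylized
    return np.where(occluded[..., None], stylized, blend)
```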

Collaboration


Dive into Neus Sabater's collaborations.

Top Co-Authors

Matthieu Hog

French Institute for Research in Computer Science and Automation


Julie Delon

Paris Descartes University
