Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Catalina Sbert is active.

Publications


Featured research published by Catalina Sbert.


IEEE Transactions on Image Processing | 1998

An axiomatic approach to image interpolation

Vicent Caselles; Jean-Michel Morel; Catalina Sbert

We discuss possible algorithms for interpolating data given on a set of curves and/or points in the plane. We propose a set of basic assumptions to be satisfied by the interpolation algorithms, which lead to a set of models in terms of possibly degenerate elliptic partial differential equations. The absolutely minimizing Lipschitz extension (AMLE) model is singled out and studied in more detail. We show experiments suggesting a possible application, the restoration of images with poor dynamic range.
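To make the AMLE model concrete, here is a minimal numpy sketch, assuming a simple fixed-point scheme for the discrete infinity-Laplace equation on a 4-neighbor grid (a crude discretization chosen for brevity, not the paper's numerical scheme; the function name and parameters are hypothetical):

```python
import numpy as np

def amle_interpolate(u0, known, iters=5000):
    """Toy AMLE-style interpolation: data prescribed where `known` is True
    (curves and/or points); unknown pixels are iterated towards the discrete
    infinity-harmonic condition u(x) = (min + max of neighbors) / 2."""
    u = u0.astype(float).copy()
    for _ in range(iters):
        p = np.pad(u, 1, mode="edge")               # reflective-like border
        nb = np.stack([p[:-2, 1:-1], p[2:, 1:-1],   # up, down
                       p[1:-1, :-2], p[1:-1, 2:]])  # left, right
        upd = 0.5 * (nb.min(axis=0) + nb.max(axis=0))
        u[~known] = upd[~known]                     # keep prescribed data fixed
    return u
```

At a fixed point, every unknown pixel equals the average of the minimum and maximum of its neighbors, which is the discrete analogue of the infinity-harmonicity underlying the AMLE interpolant.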


IEEE Transactions on Pattern Analysis and Machine Intelligence | 1997

Minimal surfaces based object segmentation

Vicent Caselles; Ron Kimmel; Guillermo Sapiro; Catalina Sbert

A geometric approach for 3D object segmentation and representation is presented. The segmentation is obtained by deformable surfaces moving towards the objects to be detected in the 3D image. The model is based on curvature motion and the computation of surfaces with minimal area, better known as minimal surfaces. The space where the surfaces are computed is induced from the 3D image (volumetric data) in which the objects are to be detected. The model links classical deformable surfaces obtained via energy minimization with intrinsic ones derived from curvature-based flows. The new approach is stable, robust, and automatically handles changes in the surface topology during the deformation.
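In level-set form, models of this family evolve an embedding function $u$ whose zero level set is the surface; one common form of the flow (notation assumed here, not quoted from the paper) is

$$\frac{\partial u}{\partial t} = g(I)\,|\nabla u|\left(\operatorname{div}\!\left(\frac{\nabla u}{|\nabla u|}\right) + \nu\right) + \nabla g(I)\cdot\nabla u, \qquad g(I) = \frac{1}{1 + |\nabla (G_\sigma * I)|^2},$$

where $I$ is the volumetric image, $g$ is an edge-stopping function that becomes small near object boundaries, and $\nu$ is an optional constant inflation force.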


IEEE Transactions on Image Processing | 2010

A PDE Formalization of Retinex Theory

Jean-Michel Morel; Ana Belén Petro; Catalina Sbert

In 1964, Edwin H. Land formulated the Retinex theory, the first attempt to simulate and explain how the human visual system perceives color. His theory, and an extension, the “reset Retinex,” were further formalized by Land and McCann. Several Retinex algorithms have been developed since. These color constancy algorithms modify the RGB values at each pixel to give an estimate of the color sensation without a priori information on the illumination. Unfortunately, the original Land–McCann algorithm is both complex and not fully specified. Indeed, this algorithm computes at each pixel an average over a very large set of paths on the image. For this reason, Retinex has received several interpretations and implementations which, among other aims, attempt to tame its excessive complexity. In this paper, it is proved that if the paths are assumed to be symmetric random walks, the Retinex solutions satisfy a discrete Poisson equation. This formalization yields an exact and fast implementation using only two FFTs. Several experiments on color images illustrate the effectiveness of the original Retinex theory.
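Up to notational conventions (assumed here, not quoted), the discrete Poisson equation in question reads

$$-\Delta_d u(x) = F(x), \qquad F(x) = \sum_{y \sim x} f\big(I(x) - I(y)\big), \qquad f(s) = \begin{cases} s, & |s| \ge t,\\ 0, & |s| < t, \end{cases}$$

where $y \sim x$ ranges over the four grid neighbors of $x$ and $t$ is the Retinex contrast threshold. As a sanity check, for $t = 0$ one gets $F = -\Delta_d I$, so $u = I$ up to an additive constant; thresholding out the small gradients is what removes the smooth illumination component.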


IEEE Transactions on Image Processing | 2009

Self-Similarity Driven Color Demosaicking

Antoni Buades; Bartomeu Coll; Jean-Michel Morel; Catalina Sbert

Demosaicking is the process by which, from a matrix of colored pixels measuring only one color component per pixel (red, green, or blue), complete color information is inferred at each pixel. This inference requires a deep understanding of the interaction between colors and of the local image geometry. Although quite successful at making such inferences with very small relative error, state-of-the-art demosaicking methods fail when the local geometry cannot be inferred from the neighboring pixels. In such a case, which occurs when thin structures or fine periodic patterns are present in the original, state-of-the-art methods can create disturbing artifacts known as the zipper effect, blur, and color spots. The aim of this paper is to show that these artifacts can be avoided by using the image's self-similarity to infer missing colors. Detailed experiments show that a satisfactory solution can be found, even for the most critical cases. Extensive comparisons with state-of-the-art algorithms are performed on two classic image databases.
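The following numpy sketch is not the paper's algorithm, only a toy illustration of the self-similarity principle for the green channel of a Bayer mosaic: missing green values get a bilinear initialization, then are refined by an NL-means-style weighted average of measured green samples found at similar patches. The function names and the patch, search, and h parameters are hypothetical:

```python
import numpy as np

def green_bilinear(mosaic, gmask):
    """Initialize missing greens with the mean of the measured 4-neighbors."""
    g = np.where(gmask, mosaic, 0.0)
    w = gmask.astype(float)
    pg, pw = np.pad(g, 1), np.pad(w, 1)
    num = pg[:-2, 1:-1] + pg[2:, 1:-1] + pg[1:-1, :-2] + pg[1:-1, 2:]
    den = pw[:-2, 1:-1] + pw[2:, 1:-1] + pw[1:-1, :-2] + pw[1:-1, 2:]
    return np.where(gmask, mosaic, num / np.maximum(den, 1e-9))

def green_selfsim(mosaic, gmask, patch=5, search=7, h=8.0):
    """Refine missing greens by averaging measured green samples whose
    surrounding patches resemble the patch around the target pixel."""
    g0 = green_bilinear(mosaic, gmask)
    r = patch // 2
    p = np.pad(g0, r, mode="reflect")
    H, W = mosaic.shape
    out = g0.copy()
    for y, x in zip(*np.nonzero(~gmask)):
        ref = p[y:y + patch, x:x + patch]
        wsum = gsum = 0.0
        for yy in range(max(0, y - search), min(H, y + search + 1)):
            for xx in range(max(0, x - search), min(W, x + search + 1)):
                if not gmask[yy, xx]:
                    continue  # only measured green samples may vote
                cand = p[yy:yy + patch, xx:xx + patch]
                wgt = np.exp(-np.mean((ref - cand) ** 2) / h ** 2)
                wsum += wgt
                gsum += wgt * mosaic[yy, xx]
        if wsum > 0:
            out[y, x] = gsum / wsum
    return out
```

The toy only conveys why repeated structures (edges, textures, periodic patterns) let one infer colors that purely local interpolation misses; the paper exploits self-similarity jointly across all three channels.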


European Conference on Computer Vision | 1996

Three Dimensional Object Modeling via Minimal Surfaces

Vicent Caselles; Ron Kimmel; Guillermo Sapiro; Catalina Sbert

A novel geometric approach for 3D object segmentation and representation is presented. The scheme is based on geometric deformable surfaces moving towards the objects to be detected. We show that this model is equivalent to the computation of surfaces of minimal area, better known as ‘minimal surfaces,’ in a Riemannian space. This space is defined by a metric induced from the 3D image (volumetric data) in which the objects are to be detected. The model shows the relation between classical deformable surfaces obtained via energy minimization and geometric ones derived from curvature-based flows. The new approach is stable, robust, and automatically handles changes in the surface topology during the deformation. Based on an efficient numerical algorithm for surface evolution, we present examples of object detection in real and synthetic images.


Image Processing On Line | 2011

Simplest Color Balance

Nicolas Limare; Jose Luis Lisani; Jean-Michel Morel; Ana Belén Petro; Catalina Sbert

In this paper we present the simplest possible color balance algorithm. The assumption underlying this algorithm is that the highest values of R, G, B observed in the image must correspond to white, and the lowest values to obscurity. The algorithm simply stretches the values of the three channels Red, Green, Blue (R, G, B) as much as it can, so that they occupy the maximal possible range [0, 255], by applying an affine transform ax + b to each channel. Since many images contain a few aberrant pixels that already occupy the 0 and 255 values, the proposed method saturates a small percentage of the pixels with the highest values to 255 and a small percentage of the pixels with the lowest values to 0 before applying the affine transform.

Source code: the ANSI C implementation, its documentation, and the online demo are accessible on the IPOL web page of this article.
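Since the abstract fully specifies the method, a minimal numpy sketch follows; the parameter names s1 and s2 for the saturated percentages are my own:

```python
import numpy as np

def simplest_color_balance(img, s1=1.5, s2=1.5):
    """img: uint8 RGB image (H, W, 3); s1, s2: percentage of pixels
    saturated to 0 and to 255, respectively, in each channel."""
    out = np.empty_like(img)
    for c in range(3):
        ch = img[..., c].astype(float)
        lo, hi = np.percentile(ch, (s1, 100.0 - s2))
        ch = np.clip(ch, lo, hi)                    # saturate the extremes
        if hi > lo:
            ch = (ch - lo) * 255.0 / (hi - lo)      # affine stretch a*x + b
        out[..., c] = np.clip(np.round(ch), 0, 255).astype(np.uint8)
    return out
```

np.percentile is used purely for brevity; for 8-bit data the same quantiles can be found in linear time with a 256-bin histogram.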


Image Processing On Line | 2011

Retinex Poisson Equation: a Model for Color Perception

Nicolas Limare; Ana Belén Petro; Catalina Sbert; Jean-Michel Morel

In 1964, Edwin H. Land formulated the Retinex theory, the first attempt to simulate and explain how the human visual system perceives color. Unfortunately, the original Land–McCann algorithm is both complex and not fully specified. Indeed, this algorithm computes at each pixel an average over a very large set of paths on the image. For this reason, Retinex has received several interpretations and implementations which, among other aims, attempt to tame its excessive complexity. However, Morel et al. have shown that the original Retinex algorithm can be formalized as a (discrete) partial differential equation. This article describes PDE-Retinex, a fast implementation of the original Land–McCann theory using only two DFTs.
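A hedged Python sketch of such an implementation follows: build the thresholded-gradient right-hand side F, then invert the (Neumann) discrete Laplacian in a cosine basis. scipy's DCT stands in for the symmetrized DFTs, and the threshold t is an assumed parameter:

```python
import numpy as np
from scipy.fft import dctn, idctn

def retinex_pde(I, t=4.0):
    """Solve -laplacian(u) = F per channel, where F accumulates the
    thresholded differences f(s) = s * 1{|s| >= t} to the 4 neighbors."""
    H, W = I.shape
    f = lambda s: np.where(np.abs(s) >= t, s, 0.0)
    F = np.zeros((H, W))
    F[:, :-1] += f(I[:, :-1] - I[:, 1:])    # right neighbor
    F[:, 1:]  += f(I[:, 1:]  - I[:, :-1])   # left neighbor
    F[:-1, :] += f(I[:-1, :] - I[1:, :])    # bottom neighbor
    F[1:, :]  += f(I[1:, :]  - I[:-1, :])   # top neighbor
    # The type-II DCT diagonalizes the 5-point Laplacian with Neumann
    # boundary conditions, so the Poisson equation becomes a division.
    Fh = dctn(F, norm="ortho")
    k, l = np.arange(H)[:, None], np.arange(W)[None, :]
    denom = 4.0 - 2.0 * np.cos(np.pi * k / H) - 2.0 * np.cos(np.pi * l / W)
    denom[0, 0] = 1.0                        # avoid 0/0 at the zero frequency
    Uh = Fh / denom
    Uh[0, 0] = 0.0                           # solution defined up to a constant
    u = idctn(Uh, norm="ortho")
    return u - u.mean() + I.mean()           # anchor the free constant
```

Setting t = 0 returns the input image up to the mean normalization, which is a quick way to test the solver.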


Siam Journal on Imaging Sciences | 2014

A Nonlocal Variational Model for Pansharpening Image Fusion

Joan Duran; Antoni Buades; Bartomeu Coll; Catalina Sbert

Pansharpening refers to the fusion process of inferring a high-resolution multispectral image from a high-resolution panchromatic image and a low-resolution multispectral one. In this paper we propose a new variational method for pansharpening which incorporates a nonlocal regularization term and two fidelity terms, one describing the relation between the panchromatic image and the high-resolution spectral channels, the other preserving the colors from the low-resolution modality. The nonlocal term is based on the image self-similarity principle applied to the panchromatic image. The existence and uniqueness of a minimizer of the described functional are proved in a suitable space of weighted integrable functions. Although quite successful in terms of relative error, state-of-the-art pansharpening methods introduce noticeable color artifacts. These spectral distortions can be significantly reduced by exploiting the image self-similarity. Extensive comparisons with state-of-the-art algorithms are performed.
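Schematically, and with notation assumed here rather than quoted from the paper, the proposed energy has the shape

$$\min_{u_1,\dots,u_N}\; \sum_{n=1}^{N} \int_{\Omega} |\nabla_w u_n|\,dx \;+\; \lambda \sum_{n=1}^{N} \big\|\Pi(u_n) - f_n\big\|_2^2 \;+\; \mu\,\Big\|P - \sum_{n=1}^{N} \alpha_n u_n\Big\|_2^2,$$

where $u_n$ are the sought high-resolution spectral channels, $f_n$ the observed low-resolution ones, $P$ the panchromatic image, $\Pi$ a blur-and-downsample operator, $\alpha_n$ mixing coefficients relating $P$ to the spectral channels, and $\nabla_w$ the nonlocal gradient whose weights $w$ are computed from the self-similarity of $P$.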


Proceedings of SPIE | 2009

Fast implementation of color constancy algorithms

Jean-Michel Morel; Ana Belén Petro; Catalina Sbert

Color constancy is a feature of the human color perception system which ensures that the perceived color of objects remains relatively constant under varying illumination conditions, and therefore closer to the physical reflectance. This perceptual effect, discovered by Helmholtz, was formalized by Land and McCann in 1971, who formulated the Retinex theory. Several theories have since been developed, known as Retinex or color constancy algorithms. In particular, an important historic variant was proposed by Horn in 1974 and another by Blake in 1985. These algorithms modify the RGB values at each pixel in an attempt to give an estimate of the physical color. Land's original algorithm is both complex and not fully specified. It computes at each pixel a stochastic integral over an unspecified set of paths on the image. For this reason, Land's algorithm has received many recent interpretations and implementations that attempt to tame its excessive complexity. In this paper, a fast and exact FFT implementation of the Land, Horn, and Blake theories is described. It permits, for the first time, a rigorous comparison of these algorithms. A slight variant of these three algorithms is proposed that turns them into contrast-enhancing algorithms. Several comparative experiments on color images illustrate the superiority of Land's model for manipulating image contrast.


Siam Journal on Imaging Sciences | 2016

Collaborative Total Variation: A General Framework for Vectorial TV Models

Joan Duran; Michael Moeller; Catalina Sbert; Daniel Cremers

Even after two decades, the total variation (TV) remains one of the most popular regularizations for image processing problems and has sparked a tremendous amount of research, particularly on moving from scalar to vector-valued functions. In this paper, we consider the gradient of a color image as a three-dimensional matrix or tensor with dimensions corresponding to the spatial extent, the intensity differences between neighboring pixels, and the spectral channels. The smoothness of this tensor is then measured by taking different norms along the different dimensions. Depending on the types of these norms, one obtains very different properties of the regularization, leading to novel models for color images. We call this class of regularizations collaborative total variation (CTV). On the theoretical side, we characterize the dual norm, the subdifferential, and the proximal mapping of the proposed regularizers. We further prove, with the help of the generalized concept of singular vectors, that an […]
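To make the construction concrete, here is a hedged numpy sketch: the discrete gradient of a color image is arranged as an (N pixels × 2 derivatives × C channels) tensor, and different norms over each pixel's (derivative, channel) block, followed by a plain sum (an l1 norm) over pixels, yield different members of the CTV family. The paper's l^{p,q,r} construction keeps the derivative and channel dimensions separate; they are collapsed here for brevity:

```python
import numpy as np

def gradient_tensor(u):
    """Forward-difference gradient of a color image u of shape (H, W, C),
    returned as a tensor of shape (H*W, 2, C): pixel, derivative, channel."""
    gx = np.zeros_like(u); gx[:, :-1, :] = u[:, 1:, :] - u[:, :-1, :]
    gy = np.zeros_like(u); gy[:-1, :, :] = u[1:, :, :] - u[:-1, :, :]
    return np.stack([gx, gy], axis=-2).reshape(-1, 2, u.shape[-1])

def ctv(A, inner):
    """Collaborative TV value: `inner` norm over each pixel's flattened
    (derivative, channel) block, then an l1 sum over pixels."""
    per_pixel = np.linalg.norm(A.reshape(A.shape[0], -1), ord=inner, axis=1)
    return per_pixel.sum()

u = np.random.rand(64, 64, 3)
A = gradient_tensor(u)
print("Frobenius-l1 (classical vectorial TV):", ctv(A, 2))
print("l1-l1 (fully decoupled, anisotropic): ", ctv(A, 1))
print("linf-l1 (strongest channel coupling): ", ctv(A, np.inf))
```

Changing only the inner norm changes how strongly the color channels share edges: the l1 choice lets each channel place edges independently, while the l-infinity choice forces them to agree, which is the kind of trade-off the CTV framework makes explicit.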

Collaboration


Dive into Catalina Sbert's collaborations.

Top Co-Authors

Ana Belén Petro

University of the Balearic Islands

Jean-Michel Morel

École normale supérieure de Cachan

Joan Duran

University of the Balearic Islands

Bartomeu Coll

University of the Balearic Islands

Antoni Buades

Paris Descartes University

Ron Kimmel

Technion – Israel Institute of Technology

Jose Luis Lisani

University of the Balearic Islands
