Gloria Haro
Pompeu Fabra University
Publications
Featured research published by Gloria Haro.
NeuroImage | 2009
Christophe Lenglet; Jennifer S. W. Campbell; Maxime Descoteaux; Gloria Haro; Peter Savadjiev; Demian Wassermann; Alfred Anwander; Rachid Deriche; G. B. Pike; Guillermo Sapiro; Kaleem Siddiqi; Paul M. Thompson
In this article, we review recent mathematical models and computational methods for the processing of diffusion Magnetic Resonance Images, including state-of-the-art reconstruction of diffusion models, cerebral white matter connectivity analysis, and segmentation techniques. We focus on Diffusion Tensor Images (DTI) and Q-Ball Images (QBI).
Journal of Scientific Computing | 2008
Andrés Almansa; Coloma Ballester; Vicent Caselles; Gloria Haro
We propose a total variation based restoration model which incorporates the image acquisition model z = h*U + n (where z is the observed sampled image, U the ideal undistorted image, h the blurring kernel and n white Gaussian noise) as a set of local constraints. These constraints, one for each pixel of the image, express the fact that the variance of the noise can be estimated from the residuals z − h*U over a neighborhood of each pixel. This is motivated by the fact that the usual inclusion of the image acquisition model as a single constraint, expressing a bound on the variance of the noise, does not give satisfactory results if we wish to simultaneously recover textured regions and obtain a good denoising of the image. We use Uzawa's algorithm to minimize the total variation subject to the proposed family of local constraints, and we present experiments using this model.
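The per-pixel constraint hinges on estimating the noise variance from the residuals z − h*U over a neighbourhood of each pixel. The following sketch shows one way such a local estimate could be computed (an illustrative NumPy/SciPy fragment, not the authors' code; the window size and toy image are assumptions):

```python
import numpy as np
from scipy.ndimage import convolve, uniform_filter

def local_residual_variance(z, u, h, win=7):
    """Per-pixel variance of the residual z - h*u over a win x win
    neighbourhood, in the spirit of the local constraints (illustrative)."""
    r = z - convolve(u, h, mode="nearest")
    mean = uniform_filter(r, size=win, mode="nearest")
    return uniform_filter(r**2, size=win, mode="nearest") - mean**2

# toy check: with u equal to the ideal image and pure noise of std 0.1,
# the local variances should cluster around 0.01
rng = np.random.default_rng(0)
u = np.zeros((64, 64))
h = np.ones((3, 3)) / 9.0
z = convolve(u, h, mode="nearest") + rng.normal(0, 0.1, u.shape)
v = local_residual_variance(z, u, h)
```

Pixels whose local residual variance exceeds the known noise variance would then contribute an active constraint in Uzawa's method.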
International Journal of Computer Vision | 2008
Gloria Haro; Gregory Randall; Guillermo Sapiro
A framework for the regularized and robust estimation of non-uniform dimensionality and density in high dimensional noisy data is introduced in this work. This leads to learning stratifications, that is, mixtures of manifolds representing different characteristics and complexities in the data set. The basic idea relies on modeling the high dimensional sample points as a process of translated Poisson mixtures, with regularizing restrictions, leading to a model which includes the presence of noise. The translated Poisson distribution is useful to model a noisy counting process, and it is derived from the noise-induced translation of a regular Poisson distribution. By maximizing the log-likelihood of the process counting the points falling into a local ball, we estimate the local dimension and density. We show that the sequence of all possible local countings in a point cloud formed by samples of a stratification can be modeled by a mixture of different translated Poisson distributions, thus allowing the presence of mixed dimensionality and densities in the same data set. With this statistical model, the parameters which best describe the data, estimated via expectation maximization, divide the points into different classes according to both dimensionality and density, together with an estimation of these quantities for each class. Theoretical asymptotic results for the model are presented as well. The presentation of the theoretical framework is complemented with artificial and real examples showing the importance of regularized stratification learning in high dimensional data analysis in general and computer vision and image analysis in particular.
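In the noiseless limit, the local-counting model reduces to the classical maximum-likelihood dimension estimator from nearest-neighbour distances. A small sketch of that special case (a Levina–Bickel style estimator for illustration; the translated-Poisson EM of the paper is not reproduced here):

```python
import numpy as np

def local_dimension(points, query, k=10):
    """MLE of the intrinsic dimension at `query` from distances to its k
    nearest neighbours -- the noiseless special case of the local Poisson
    counting model (illustrative, not the authors' EM procedure)."""
    d = np.sort(np.linalg.norm(points - query, axis=1))
    d = d[1:k + 1]                     # drop the zero self-distance
    return (k - 1) / np.sum(np.log(d[-1] / d[:-1]))

rng = np.random.default_rng(1)
# 2000 samples on a 2-D plane embedded in 5-D ambient space
plane = np.zeros((2000, 5))
plane[:, :2] = rng.uniform(-1, 1, (2000, 2))
dim = local_dimension(plane, plane[0], k=50)   # should be close to 2
```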
Computer Vision and Image Understanding | 2008
Vicent Caselles; Gloria Haro; Guillermo Sapiro; Joan Verdera
Geometric approaches for filling-in surface holes are introduced and studied in this paper. The basic idea is to represent the surface of interest in implicit form, and fill-in the holes with a scalar, or systems of, geometric partial differential equations often derived from optimization principles. These equations include a system for the joint interpolation of scalar and vector fields, a Laplacian-based minimization, a mean curvature diffusion flow, and an absolutely minimizing Lipschitz extension. The theoretical and computational framework, as well as examples with synthetic and real data, are presented in this paper.
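Of the variants listed, the Laplacian-based one is the simplest to sketch: on a 2-D scalar field it amounts to harmonic interpolation of the hole from its boundary values. A toy NumPy version (illustrative only; the paper works with implicit surface representations in 3-D):

```python
import numpy as np

def fill_hole_laplace(f, mask, iters=2000):
    """Fill the masked region of a 2-D scalar field by harmonic
    interpolation (Jacobi iterations on Laplace's equation) -- a minimal
    stand-in for the Laplacian-based variant described in the paper."""
    u = f.copy()
    u[mask] = f[~mask].mean()          # rough initial guess inside the hole
    for _ in range(iters):
        avg = 0.25 * (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
                      np.roll(u, 1, 1) + np.roll(u, -1, 1))
        u[mask] = avg[mask]            # update only the unknown pixels
    return u

# a linear ramp with a square hole: harmonic filling recovers the ramp
x = np.linspace(0.0, 1.0, 32)
f = np.tile(x, (32, 1))
mask = np.zeros_like(f, dtype=bool)
mask[12:20, 12:20] = True
u = fill_hole_laplace(f, mask)
```

Linear functions are harmonic, so the filled values agree with the original ramp; sharper structures would of course be smoothed, which is where the curvature-based variants come in.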
international conference on image processing | 2009
Jaime Gallego; Montse Pardàs; Gloria Haro
In this paper we present a segmentation system for monocular video sequences with a static camera that aims at foreground/background separation and tracking. We propose to combine a simple pixel-wise model for the background with a general-purpose region-based model for the foreground. The background is modeled using one Gaussian per pixel, yielding a precise and easy-to-update model. The foreground is modeled using a Gaussian mixture model with feature vectors consisting of the spatial (x, y) and colour (r, g, b) components. The spatial components of this model are updated with the expectation maximization algorithm after the classification of each frame. The background model is formulated in the same 5-dimensional feature space so that a maximum a posteriori framework can be applied for the classification. The classification is carried out with a graph cut algorithm that takes neighborhood information into account. The results presented in the paper show the improvement achieved by the system in situations where the foreground objects have colours similar to those of the background.
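The pixel-wise background test underlying such a model can be sketched as follows (an illustrative fragment; the threshold and the toy frame are assumptions, and the paper's full system adds the foreground mixture model and the graph cut on top):

```python
import numpy as np

def background_mask(frame, mu, sigma, thresh=2.5):
    """Classify a pixel as background when its colour lies within `thresh`
    standard deviations of the per-pixel Gaussian background model."""
    d2 = ((frame - mu) / sigma) ** 2          # squared normalized distance
    return d2.sum(axis=-1) < thresh ** 2 * frame.shape[-1]

# toy scene: grey background, bright square "foreground" object
mu = np.full((24, 24, 3), 0.5)                # per-pixel background means
sigma = np.full((24, 24, 3), 0.05)            # per-pixel background stds
frame = mu.copy()
frame[8:16, 8:16] = 0.95                      # 8x8 foreground square
bg = background_mask(frame, mu, sigma)
```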
International Journal of Computer Vision | 2006
Gloria Haro; Marcelo Bertalmío; Vicent Caselles
In film production it is sometimes inconvenient, or simply impossible, to shoot night scenes at night: the film budget, schedule or location may not allow it. In these cases the scenes are shot in daytime, and the 'night look' is achieved by placing a blue filter in front of the lens and under-exposing the film. This technique, which the American film industry has used for many decades, is called 'Day for Night' (or 'American Night' in Europe). But the images thus obtained do not usually look realistic: they tend to be too bluish, and the objects' brightness seems unnatural for night-light. In this article we introduce a digital Day for Night algorithm that achieves very realistic results. We use a set of very simple equations, based on real physical data and experimental data on visual perception. To simulate the loss of visual acuity we introduce a novel diffusion Partial Differential Equation (PDE) which takes luminance into account, respects contrast, produces no ringing, is stable, and is very easy to implement and fast. The user only provides the original day image and the desired level of darkness of the result. The whole process from original day image to final night image runs in a few seconds, the computations being mostly local.
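As a rough illustration of the colour side of the idea only (the desaturation weights, blue-cast factors and darkness normalisation below are all hypothetical constants, not the paper's calibrated physical model, and the acuity-simulating diffusion PDE is omitted):

```python
import numpy as np

def day_for_night(rgb, darkness=0.25):
    """Very rough digital day-for-night: desaturate towards luminance,
    push the hue towards blue, and darken to a target mean level."""
    lum = rgb @ np.array([0.299, 0.587, 0.114])   # per-pixel luminance
    night = 0.5 * rgb + 0.5 * lum[..., None]      # partial desaturation
    night *= np.array([0.6, 0.7, 1.0])            # blue cast
    return np.clip(night * darkness / max(night.mean(), 1e-8), 0.0, 1.0)

rng = np.random.default_rng(2)
day = rng.uniform(0.2, 1.0, (16, 16, 3))
night = day_for_night(day)
```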
Image and Vision Computing | 2010
Gloria Haro; Montse Pardàs
Traditional shape-from-silhouette methods compute the 3D shape as the intersection of the back-projected silhouettes in 3D space, the so-called visual hull. However, silhouettes obtained with background subtraction techniques often present misdetection errors (produced by false negatives or occlusions), which lead to incomplete 3D shapes. Our approach deals with misdetections, false alarms, and noise in the silhouettes. We recover the voxel occupancy which describes the 3D shape by minimizing an energy based on an approximation of the error between the shape's 2D projections and the silhouettes. Two variants of the projection, and hence of the energy, as a function of the voxel occupancy are proposed, one of which outperforms the other. The energy also includes a sparsity measure and a regularization term, and takes into account the visibility of the voxels in each view in order to handle self-occlusions.
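It helps to see the baseline that this work makes robust: the visual hull, i.e. the intersection of back-projected silhouettes. A minimal sketch with three orthographic views (a hypothetical axis-aligned setup for illustration, not the paper's camera model):

```python
import numpy as np

def visual_hull(silhouettes):
    """Classic visual hull on an axis-aligned voxel grid: a voxel is kept
    only if every orthographic silhouette covers its projection.  Any
    missed detection in a silhouette carves away true shape, which is the
    failure mode the energy-based formulation addresses."""
    sx, sy, sz = silhouettes           # views along the x, y and z axes
    n = sx.shape[0]
    occ = np.ones((n, n, n), dtype=bool)
    occ &= sx[None, :, :]              # view along x: a (y, z) mask
    occ &= sy[:, None, :]              # view along y: an (x, z) mask
    occ &= sz[:, :, None]              # view along z: an (x, y) mask
    return occ

# a centred cube projects to the same square in all three views
n = 16
sq = np.zeros((n, n), dtype=bool)
sq[4:12, 4:12] = True
occ = visual_hull((sq, sq, sq))
```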
Pattern Recognition Letters | 2012
Jaime Gallego; Montse Pardàs; Gloria Haro
In this paper we present a foreground segmentation and tracking system for monocular static-camera sequences in indoor scenarios that achieves correct foreground detection even in complicated scenes where foreground and background colours are similar. The workflow of the system is based on three main steps. First, an initial foreground detection performs a simple segmentation via Gaussian pixel colour modeling and shadow removal. Next, a tracking step uses the foreground segmentation to identify the objects and tracks them with a modified mean shift algorithm. Finally, an enhanced foreground segmentation step is formulated in a Bayesian framework, in which foreground and shadow candidates are used to construct probabilistic foreground and shadow models. The Bayesian framework combines a pixel-wise colour background model with spatial-colour models for the foreground and shadows. The final classification is performed using the graph cut algorithm. The tracking step allows a correct updating of the probabilistic models, reducing false negative and false positive detections and yielding a robust segmentation and tracking of each object in the scene.
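The basic procedure underlying the (modified) mean shift tracker is plain mean-shift mode seeking, sketched here with a Gaussian kernel on a toy point cloud (illustrative only; the paper's modifications are not reproduced):

```python
import numpy as np

def mean_shift(points, start, bandwidth=1.0, iters=50):
    """Iteratively move `start` to the kernel-weighted mean of the points,
    converging to a local mode of the point density."""
    x = np.asarray(start, dtype=float)
    for _ in range(iters):
        w = np.exp(-np.sum((points - x) ** 2, axis=1) / (2 * bandwidth ** 2))
        x = (w[:, None] * points).sum(axis=0) / w.sum()
    return x

rng = np.random.default_rng(3)
cloud = rng.normal([5.0, -2.0], 0.3, (500, 2))   # one cluster at (5, -2)
mode = mean_shift(cloud, start=[4.0, -1.0])      # converges to the cluster
```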
Multiscale Modeling & Simulation | 2006
Andrés Almansa; Vicent Caselles; Gloria Haro; Bernard Rougé
We propose an algorithm to solve a problem in image restoration which considers several different aspects of it, namely irregular sampling, denoising, deconvolution, and zooming. Our algorithm is b...
Pattern Recognition | 2012
Gloria Haro
This paper proposes a shape-from-silhouette algorithm that is robust to inconsistent silhouettes, which are common in real applications due to occlusions, errors in the background subtraction, noise or even calibration errors. The recovery of the shape that best fits the available data (silhouettes) is formulated as a continuous energy minimization problem. The energy is based on the error between the silhouettes and the shape, plus a regularization term. Thanks to the characterization of the visible surface in each view as a function of the shape, we consider the error in the volume space. As a result, we obtain an iterative volume-based algorithm that evolves the initial shape towards the shape in general agreement with the silhouettes, and is thus robust to errors in the silhouettes. We have implemented the proposed algorithm on a graphics processor, where parallel computing allows reduced computation times. The results obtained compare favorably to the state of the art.
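The flavour of such a continuous formulation can be conveyed on a 2-D toy grid: occupancies are relaxed to soft values, each ray's projection is approximated by 1 − ∏(1 − v), and gradient descent drives the projections towards the silhouettes. This is a simplified sketch of the idea only, not the paper's functional (which also models visibility and includes a regularization term):

```python
import numpy as np

def refine_occupancy(sils, n, lr=0.5, iters=300):
    """Gradient descent on a soft 2-D occupancy grid so that its two
    axis projections match the given silhouettes (toy illustration)."""
    v = np.full((n, n), 0.1)
    for _ in range(iters):
        grad = np.zeros_like(v)
        for axis, s in enumerate(sils):
            prod = np.prod(1.0 - v, axis=axis, keepdims=True)
            p = 1.0 - np.squeeze(prod, axis=axis)     # soft projection
            # dp/dv for a cell = product of (1 - v) over the rest of its ray
            rest = prod / (1.0 - v)
            grad += 2.0 * np.expand_dims(p - s, axis) * rest
        v = np.clip(v - lr * grad, 0.0, 0.99)         # keep 1 - v > 0
    return v

# true shape: a centred square; silhouettes are its two axis projections
n = 16
truth = np.zeros((n, n), dtype=bool)
truth[4:12, 4:12] = True
sils = (truth.any(axis=0).astype(float), truth.any(axis=1).astype(float))
v = refine_occupancy(sils, n)
```

Thresholding the converged occupancies recovers the square; unlike hard intersection, the soft energy degrades gracefully when the silhouettes carry errors.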