
Publications


Featured research published by Elena Garces.


Computer Graphics Forum | 2012

Intrinsic Images by Clustering

Elena Garces; Adolfo Muñoz; Jorge Lopez-Moreno; Diego Gutierrez

Decomposing an input image into its intrinsic shading and reflectance components is a long‐standing ill‐posed problem. We present a novel algorithm that requires no user strokes and works on a single image. Based on simple assumptions about its reflectance and luminance, we first find clusters of similar reflectance in the image, and build a linear system describing the connections and relations between them. Our assumptions are less restrictive than widely‐adopted Retinex‐based approaches, and can be further relaxed in conflicting situations. The resulting system is robust even in the presence of areas where our assumptions do not hold. We show a wide variety of results, including natural images, objects from the MIT dataset and texture images, along with several applications, proving the versatility of our method.
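
A minimal sketch of the clustering stage this abstract describes, assuming k-means over per-pixel chromaticity as a plausible stand-in for the paper's reflectance clustering; the linear system relating clusters is only indicated in a comment, and none of this is the authors' code.

import numpy as np
from sklearn.cluster import KMeans

def cluster_reflectance(image, n_clusters=20):
    """image: HxWx3 float array in [0, 1]."""
    h, w, _ = image.shape
    intensity = image.sum(axis=2, keepdims=True) + 1e-6
    chroma = (image / intensity).reshape(-1, 3)   # luminance-free colour
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(chroma)
    # Each cluster then becomes one unknown in a linear system that
    # encodes the connections and relations between clusters.
    return labels.reshape(h, w)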


International Conference on Computer Graphics and Interactive Techniques | 2014

Intrinsic video and applications

Genzhi Ye; Elena Garces; Yebin Liu; Qionghai Dai; Diego Gutierrez

We present a method to decompose a video into its intrinsic components of reflectance and shading, plus a number of related example applications in video editing such as segmentation, stylization, material editing, recolorization and color transfer. Intrinsic decomposition is an ill-posed problem, which becomes even more challenging in the case of video due to the need for temporal coherence and the potentially large memory requirements of a global approach. Additionally, user interaction should be kept to a minimum in order to ensure efficiency. We propose a probabilistic approach, formulating a Bayesian Maximum a Posteriori problem to drive the propagation of clustered reflectance values from the first frame, and defining additional constraints as priors on the reflectance and shading. We explicitly leverage temporal information in the video by building a causal-anticausal, coarse-to-fine iterative scheme, and by relying on optical flow information. We impose no restrictions on the input video, and show examples representing a varied range of difficult cases. Our method is the first one designed explicitly for video; moreover, it naturally ensures temporal consistency, and compares favorably against the state of the art in this regard.
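
A hedged sketch of the temporal-propagation idea: per-pixel reflectance cluster labels are carried from one frame to the next with dense optical flow. Farneback flow stands in for whatever flow estimator the authors used, and labels would come from a first-frame clustering; this is an illustration, not their implementation.

import cv2
import numpy as np

def propagate_labels(prev_gray, next_gray, labels):
    # Flow from the NEXT frame back to the PREVIOUS one, so every pixel
    # in the next frame can fetch its label from where it came from.
    flow = cv2.calcOpticalFlowFarneback(next_gray, prev_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    h, w = labels.shape
    grid_x, grid_y = np.meshgrid(np.arange(w), np.arange(h))
    map_x = (grid_x + flow[..., 0]).astype(np.float32)
    map_y = (grid_y + flow[..., 1]).astype(np.float32)
    return cv2.remap(labels.astype(np.float32), map_x, map_y,
                     cv2.INTER_NEAREST).astype(labels.dtype)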


International Conference on Computer Graphics and Interactive Techniques | 2014

A similarity measure for illustration style

Elena Garces; Aseem Agarwala; Diego Gutierrez; Aaron Hertzmann

This paper presents a method for measuring the similarity in style between two pieces of vector art, independent of content. Similarity is measured by the differences between four types of features: color, shading, texture, and stroke. Feature weightings are learned from crowdsourced experiments. This perceptual similarity enables style-based search. Using our style-based search feature, we demonstrate an application that allows users to create stylistically coherent clip art mash-ups.
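
A minimal sketch of the weighted style distance the abstract describes: per-group feature differences combined with weights learned from crowdsourced comparisons. The feature extractors and weight values here are placeholders, not the paper's; style-based search then reduces to a nearest-neighbour query under this metric, as the second function shows.

import numpy as np

FEATURE_GROUPS = ["color", "shading", "texture", "stroke"]

def style_distance(feats_a, feats_b, weights):
    """feats_*: dict mapping group name -> 1D feature vector.
    weights: dict of learned per-group weights."""
    return sum(weights[g] * np.linalg.norm(feats_a[g] - feats_b[g])
               for g in FEATURE_GROUPS)

def search(query_feats, database, weights, k=5):
    # Rank the whole collection by style distance to the query.
    ranked = sorted(database, key=lambda f: style_distance(query_feats, f, weights))
    return ranked[:k]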


Computer Graphics Forum | 2013

Multiple Light Source Estimation in a Single Image

Jorge Lopez-Moreno; Elena Garces; Sunil Hadap; Erik Reinhard; Diego Gutierrez

Many high‐level image processing tasks require an estimate of the positions, directions and relative intensities of the light sources that illuminated the depicted scene. In image‐based rendering, augmented reality and computer vision, such tasks include matching image contents based on illumination, inserting rendered synthetic objects into a natural image, intrinsic images, shape from shading and image relighting. Yet, accurate and robust illumination estimation, particularly from a single image, is a highly ill‐posed problem. In this paper, we present a new method to estimate the illumination in a single image as a combination of achromatic lights with their 3D directions and relative intensities. In contrast to previous methods, we base our azimuth angle estimation on curve fitting and recursive refinement of the number of light sources. Similarly, we present a novel surface normal approximation using an osculating arc for the estimation of zenith angles. By means of a new data set of ground‐truth data and images, we demonstrate that our approach produces more robust and accurate results, and show its versatility through novel applications such as image compositing and analysis.
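
A toy illustration of the azimuth-by-curve-fitting idea, under the rough Lambertian assumption that intensity sampled along an object's silhouette varies as a clipped cosine of the angle between the contour normal and the light's azimuth; fitting that curve recovers the azimuth. The model and initial guesses are assumptions for this sketch, not the paper's estimator.

import numpy as np
from scipy.optimize import curve_fit

def shading_model(normal_angle, azimuth, ambient, gain):
    # Clipped-cosine shading along the silhouette.
    return ambient + gain * np.clip(np.cos(normal_angle - azimuth), 0, None)

def estimate_azimuth(normal_angles, intensities):
    """normal_angles: contour normal directions (radians);
    intensities: image intensity sampled at those contour points."""
    p0 = [0.0, intensities.min(), intensities.max() - intensities.min()]
    popt, _ = curve_fit(shading_model, normal_angles, intensities, p0=p0)
    return popt[0] % (2 * np.pi)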


Multimedia Tools and Applications | 2017

Style-based exploration of illustration datasets

Elena Garces; Aseem Agarwala; Aaron Hertzmann; Diego Gutierrez

Searching by style in illustration data sets is a problem in information retrieval that has received little attention so far. One of its main difficulties is that the perception of style is highly subjective, which makes labeling styles a very difficult task. Although style is hard to predict computationally, certain properties such as colorfulness, line style or shading can be successfully captured by existing style metrics. However, there is little knowledge about how we distinguish between different styles and how these metrics can be used to guide users in style-based interactions. In this paper, we propose several contributions towards a better comprehension of illustration style and its usefulness for data exploration and retrieval. First, we provide new insights about how we perceive style in illustration. Second, we evaluate a handmade style clustering of clip art pieces against an existing style metric to analyze how this metric aligns with expert knowledge. Finally, we propose a method for efficient navigation and exploration of large clip art data sets which takes into account both semantic labeling of the data and its style. Our approach combines hierarchical clustering with dimensionality reduction techniques and strategic sampling to obtain intuitive and useful visualizations.
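
A rough sketch of the pipeline the abstract outlines: hierarchical clustering over style features, a 2D embedding for display, and one representative sampled per cluster. Ward linkage and t-SNE are plausible component choices for this sketch, not necessarily the paper's.

import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from sklearn.manifold import TSNE

def build_exploration_view(style_feats, n_clusters=30):
    """style_feats: N x D matrix of per-clip-art style features."""
    tree = linkage(style_feats, method="ward")
    clusters = fcluster(tree, t=n_clusters, criterion="maxclust")
    coords = TSNE(n_components=2).fit_transform(style_feats)
    # Strategic sampling: show the member closest to each cluster centroid.
    reps = []
    for c in np.unique(clusters):
        members = np.where(clusters == c)[0]
        centroid = style_feats[members].mean(axis=0)
        reps.append(members[np.argmin(
            np.linalg.norm(style_feats[members] - centroid, axis=1))])
    return coords, clusters, np.array(reps)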


CEIG | 2014

Depth from a Single Image Through User Interaction

Angeles López; Elena Garces; Diego Gutierrez

In this paper we present a method to obtain a depth map from a single image of a scene by exploiting both image content and user interaction. Assuming that regions with low gradients will have similar depth values, we formulate the problem as an optimization process across a graph, where pixels are considered as nodes and edges between neighbouring pixels are assigned weights based on the image gradient. Starting from a number of user-defined constraints, depth values are propagated between highly connected nodes, i.e. those joined by small gradients. Such constraints include, for example, depth equalities and inequalities between pairs of pixels, and may include some information about perspective. This framework provides a depth map of the scene, which is useful for a number of applications.
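
A sketch of the optimization described above: pixels are graph nodes, neighbouring pixels are linked with weights that fall off with the image gradient, and user clicks act as soft absolute-depth constraints. The exponential weighting and constraint strength are illustrative assumptions, and the paper's pairwise equalities, inequalities and perspective cues are omitted.

import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve

def depth_from_scribbles(gray, constraints, beta=10.0, lam=100.0):
    """gray: HxW image in [0, 1]; constraints: {(y, x): depth} user clicks."""
    h, w = gray.shape
    n = h * w
    idx = np.arange(n).reshape(h, w)
    rows, cols, vals = [], [], []
    for di, dj in [(0, 1), (1, 0)]:              # right and down neighbours
        a = idx[: h - di, : w - dj].ravel()
        b = idx[di:, dj:].ravel()
        grad = np.abs(gray[: h - di, : w - dj] - gray[di:, dj:]).ravel()
        wgt = np.exp(-beta * grad)               # low gradient -> strong link
        rows += [a, b]; cols += [b, a]; vals += [wgt, wgt]
    W = sp.coo_matrix((np.concatenate(vals),
                       (np.concatenate(rows), np.concatenate(cols))), (n, n))
    L = sp.diags(np.asarray(W.sum(axis=1)).ravel()) - W   # graph Laplacian
    diag, rhs = np.zeros(n), np.zeros(n)
    for (y, x), d in constraints.items():        # soft equality constraints
        diag[idx[y, x]] += lam
        rhs[idx[y, x]] += lam * d
    depth = spsolve((L + sp.diags(diag)).tocsr(), rhs)
    return depth.reshape(h, w)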


Computer Graphics Forum | 2017

Intrinsic Light Field Images

Elena Garces; Jose I. Echevarria; Wen Zhang; Hongzhi Wu; Kun Zhou; Diego Gutierrez

We present a method to automatically decompose a light field into its intrinsic shading and albedo components. Contrary to previous work targeted to two‐dimensional (2D) single images and videos, a light field is a 4D structure that captures non‐integrated incoming radiance over a discrete angular domain. This higher dimensionality of the problem renders previous state‐of‐the‐art algorithms impractical either due to their cost of processing a single 2D slice, or their inability to enforce proper coherence in additional dimensions. We propose a new decomposition algorithm that jointly optimizes the whole light field data for proper angular coherence. For efficiency, we extend Retinex theory, working on the gradient domain, where new albedo and occlusion terms are introduced. Results show that our method provides 4D intrinsic decompositions difficult to achieve with previous state‐of‐the‐art algorithms. We further provide a comprehensive analysis and comparisons with existing intrinsic image/video decomposition methods on light field images.
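
A sketch of the classical gradient-domain Retinex core that the paper extends to 4D: gradients with a large chromaticity change are attributed to albedo, the rest to shading, and the log-albedo is then reintegrated with a Poisson solve (not shown). The threshold and the scalar chromaticity input are assumptions; the paper's angular-coherence and occlusion terms are omitted.

import numpy as np

def classify_gradients(log_img, chroma, tau=0.05):
    """log_img, chroma: HxW arrays (log intensity, scalar chromaticity)."""
    gx = np.diff(log_img, axis=1, append=log_img[:, -1:])
    gy = np.diff(log_img, axis=0, append=log_img[-1:, :])
    cx = np.abs(np.diff(chroma, axis=1, append=chroma[:, -1:]))
    cy = np.abs(np.diff(chroma, axis=0, append=chroma[-1:, :]))
    # Keep a gradient in the albedo layer only if chromaticity changed too;
    # otherwise the intensity change is explained by shading.
    ax = np.where(cx > tau, gx, 0.0)
    ay = np.where(cy > tau, gy, 0.0)
    return ax, ay   # reintegrate (Poisson) to obtain the log-albedo image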


Computer Graphics Forum | 2017

Convolutional Sparse Coding for Capturing High-Speed Video Content

Ana Serrano; Elena Garces; Belen Masia; Diego Gutierrez

Video capture is limited by the trade‐off between spatial and temporal resolution: when capturing videos of high temporal resolution, the spatial resolution decreases due to bandwidth limitations in the capture system. Achieving both high spatial and temporal resolution is only possible with highly specialized and very expensive hardware, and even then the same basic trade‐off remains. The recent introduction of compressive sensing and sparse reconstruction techniques allows for the capture of single‐shot high‐speed video, by coding the temporal information in a single frame, and then reconstructing the full video sequence from this single‐coded image and a trained dictionary of image patches. In this paper, we first analyse this approach, and find insights that help improve the quality of the reconstructed videos. We then introduce a novel technique, based on convolutional sparse coding (CSC), and show how it outperforms the state‐of‐the‐art, patch‐based approach in terms of flexibility and efficiency, due to the convolutional nature of its filter banks. The key idea for CSC high‐speed video acquisition is extending the basic formulation by imposing an additional constraint in the temporal dimension, which enforces sparsity of the first‐order derivatives over time.
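
A hedged sketch of the kind of energy the abstract describes: a coded single frame is explained as a masked sum of convolutional reconstructions over time, with an L1 penalty on the coefficient maps plus the key extra L1 penalty on their first-order temporal derivatives. All names and shapes here are illustrative, and the actual solver that would minimise this energy is not shown.

import numpy as np
from scipy.signal import fftconvolve

def csc_video_energy(y, mask, filters, maps, lam=0.1, mu=0.1):
    """y: HxW coded image; mask: TxHxW exposure code; filters: K small 2D
    kernels; maps: TxKxHxW sparse coefficient maps."""
    frames = np.stack([
        sum(fftconvolve(maps[t, k], filters[k], mode="same")
            for k in range(len(filters)))
        for t in range(maps.shape[0])])
    data_term = 0.5 * np.sum((y - (mask * frames).sum(axis=0)) ** 2)
    sparsity = lam * np.abs(maps).sum()
    # Additional constraint in the temporal dimension: sparse first-order
    # derivatives of the coefficient maps over time.
    temporal = mu * np.abs(np.diff(maps, axis=0)).sum()
    return data_term + sparsity + temporal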


arXiv: Computer Vision and Pattern Recognition | 2017

Transfer Learning for Illustration Classification

Manuel Lagunas; Elena Garces

The field of image classification has shown outstanding success thanks to the development of deep learning techniques. Despite the great performance obtained, most of the work has focused on natural images, ignoring other domains like artistic depictions. In this paper, we use transfer learning techniques to propose a new classification network with better performance on illustration images. Starting from the deep convolutional network VGG19, pre-trained with natural images, we propose two novel models which learn object representations in the new domain. Our optimized network will learn new low-level features of the images (colours, edges, textures) while keeping the knowledge of the objects and shapes that it already learned from the ImageNet dataset, thus requiring much less data for training. We propose a novel dataset of illustration images labelled by content, where our optimized architecture achieves an accuracy of 86.61%.
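
A sketch of a transfer-learning recipe consistent with the abstract: start from VGG19 pre-trained on ImageNet, let the early convolutional layers re-learn the low-level statistics of illustrations while keeping the later layers frozen, and attach a new classification head. Which layers to unfreeze and the hyper-parameters are assumptions, not the paper's exact architecture.

import torch.nn as nn
from torchvision import models

def make_illustration_classifier(n_classes, n_trainable_low=10):
    vgg = models.vgg19(weights=models.VGG19_Weights.IMAGENET1K_V1)
    # Freeze everything, then unfreeze the first few conv parameters so the
    # network can adapt its low-level colour/edge/texture features.
    for p in vgg.parameters():
        p.requires_grad = False
    for p in list(vgg.features.parameters())[:n_trainable_low]:
        p.requires_grad = True
    # New, trainable output layer for the illustration classes.
    vgg.classifier[6] = nn.Linear(4096, n_classes)
    return vgg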


CEIG | 2015

Low Cost Decomposition of Direct and Global Illumination in Real Scenes

Elena Garces; Fernando Martin; Diego Gutierrez

