Publications

Featured research published by Belen Masia.


International Conference on Computer Graphics and Interactive Techniques | 2013

Femto-photography: capturing and visualizing the propagation of light

Andreas Velten; Di Wu; Adrian Jarabo; Belen Masia; Christopher Barsi; Chinmaya Joshi; Everett Lawson; Moungi G. Bawendi; Diego Gutierrez; Ramesh Raskar

We present femto-photography, a novel imaging technique to capture and visualize the propagation of light. With an effective exposure time of 1.85 picoseconds (ps) per frame, we reconstruct movies of ultrafast events at an equivalent resolution of about one half trillion frames per second. Because cameras with this shutter speed do not exist, we re-purpose modern imaging hardware to record an ensemble average of repeatable events that are synchronized to a streak sensor, in which the time of arrival of light from the scene is coded in one of the sensor's spatial dimensions. We introduce reconstruction methods that allow us to visualize the propagation of femtosecond light pulses through macroscopic scenes; at such fast resolution, we must consider the notion of time-unwarping between the camera's and the world's space-time coordinate systems to take into account effects associated with the finite speed of light. We apply our femto-photography technique to visualizations of very different scenes, which allow us to observe the rich dynamics of time-resolved light transport effects, including scattering, specular reflections, diffuse interreflections, diffraction, caustics, and subsurface scattering. Our work has potential applications in artistic, educational, and scientific visualizations; industrial imaging to analyze material properties; and medical imaging to reconstruct subsurface elements. In addition, our time-resolved technique may motivate new forms of computational photography.
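The time-unwarping idea in the abstract can be sketched in a few lines: an event recorded at camera time t at a pixel observing a scene point at known distance d actually occurred d/c earlier in world time. This is only an illustrative sketch of that correction (the function name and picosecond convention are assumptions, not the paper's code):

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def time_unwarp(camera_times_ps, distances_m):
    """Shift per-pixel camera arrival times (picoseconds) back by the
    light's travel time from the observed scene point to the sensor,
    yielding the time each event occurred in world coordinates."""
    travel_ps = (np.asarray(distances_m, dtype=float) / C) * 1e12
    return np.asarray(camera_times_ps, dtype=float) - travel_ps
```

At picosecond resolution light covers only ~0.3 mm per frame, so for a point 0.3 m from the sensor the correction is on the order of a full nanosecond, far larger than the 1.85 ps exposure.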


Computers & Graphics | 2013

Special Section on Advanced Displays: A survey on computational displays: Pushing the boundaries of optics, computation, and perception

Belen Masia; Gordon Wetzstein; Piotr Didyk; Diego Gutierrez

Display technology has undergone great progress over the last few years. From higher contrast to better temporal resolution or more accurate color reproduction, modern displays are capable of showing images which are much closer to reality. In addition to this trend, we have recently seen the resurrection of stereo technology, which in turn fostered further interest on automultiscopic displays. These advances share the common objective of improving the viewing experience by means of a better reconstruction of the plenoptic function along any of its dimensions. In addition, one usual strategy is to leverage known aspects of the human visual system (HVS) to provide apparent enhancements, beyond the physical limits of the display. In this survey, we analyze these advances, categorize them along the dimensions of the plenoptic function, and present the relevant aspects of human perception on which they rely.


International Conference on Computer Graphics and Interactive Techniques | 2009

Evaluation of reverse tone mapping through varying exposure conditions

Belen Masia; Sandra Agustin; Roland W. Fleming; Olga Sorkine; Diego Gutierrez

Most existing image content has low dynamic range (LDR), which necessitates effective methods to display such legacy content on high dynamic range (HDR) devices. Reverse tone mapping operators (rTMOs) aim to take LDR content as input and adjust the contrast intelligently to yield output that recreates the HDR experience. In this paper we show that current rTMO approaches fall short when the input image is not exposed properly. More specifically, we report a series of perceptual experiments using a Brightside HDR display and show that, while existing rTMOs perform well for under-exposed input data, the perceived quality degrades substantially with over-exposure, to the extent that in some cases subjects prefer the LDR originals to images that have been treated with rTMOs. We show that, in these cases, a simple rTMO based on gamma expansion avoids the errors introduced by other methods, and propose a method to automatically set a suitable gamma value for each image, based on the image key and empirical data. We validate the results both by means of perceptual experiments and using a recent image quality metric, and show that this approach enhances visible details without causing artifacts in incorrectly-exposed regions. Additionally, we perform another set of experiments which suggest that spatial artifacts introduced by rTMOs are more disturbing than inaccuracies in the expanded intensities. Together, these findings suggest that when the quality of the input data is unknown, reverse tone mapping should be handled with simple, non-aggressive methods to achieve the desired effect.
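The "simple rTMO based on gamma expansion" can be sketched as follows. The image key is commonly measured as log-average luminance; the linear gamma schedule below is a hypothetical stand-in, not the paper's fitted model, which was derived from empirical data:

```python
import numpy as np

def image_key(luminance, eps=1e-6):
    """Log-average luminance, a standard measure of an image's 'key'
    (low for dark images, near 1 for bright normalized input)."""
    return float(np.exp(np.mean(np.log(luminance + eps))))

def gamma_expand(ldr, gamma):
    """Reverse tone map an LDR image in [0, 1] by gamma expansion;
    larger gamma stretches bright regions more aggressively."""
    return np.clip(ldr, 0.0, 1.0) ** gamma

def auto_gamma(luminance, lo=1.0, hi=3.0):
    """Illustrative per-image gamma schedule (an assumption, not the
    paper's formula): darker-keyed images receive stronger expansion."""
    return lo + (hi - lo) * (1.0 - image_key(luminance))
```

The resulting expanded values would then be scaled to the HDR display's peak luminance.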


International Journal of Computer Vision | 2014

Decomposing Global Light Transport Using Time of Flight Imaging

Di Wu; Andreas Velten; Matthew O'Toole; Belen Masia; Amit K. Agrawal; Qionghai Dai; Ramesh Raskar

Global light transport is composed of direct and indirect components. In this paper, we take the first steps toward analyzing light transport using the high temporal resolution information of time of flight (ToF) images. With pulsed scene illumination, the time profile at each pixel of these images separates different illumination components by their finite travel time and encodes complex interactions between the incident light and the scene geometry with spatially-varying material properties. We exploit the time profile to decompose light transport into its constituent direct, subsurface scattering, and interreflection components. We show that the time profile is well modelled using a Gaussian function for the direct and interreflection components, and a decaying exponential function for the subsurface scattering component. We use our direct, subsurface scattering, and interreflection separation algorithm for five computer vision applications: recovering projective depth maps, identifying subsurface scattering objects, measuring parameters of analytical subsurface scattering models, performing edge detection using ToF images and rendering novel images of the captured scene with adjusted amounts of subsurface scattering.
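The per-pixel time-profile model described in the abstract (Gaussians for the direct and interreflection components, a decaying exponential for subsurface scattering) can be written down directly. This is a minimal sketch of that model; the parameterization and function names are illustrative, not the paper's code:

```python
import numpy as np

def gaussian(t, amp, t0, sigma):
    """Direct or interreflection pulse: a Gaussian arriving at time t0."""
    return amp * np.exp(-0.5 * ((t - t0) / sigma) ** 2)

def subsurface(t, amp, t0, tau):
    """Subsurface scattering tail: a decaying exponential after onset t0."""
    return np.where(t >= t0, amp * np.exp(-(t - t0) / tau), 0.0)

def pixel_time_profile(t, direct, inter, sss):
    """Sum of the three components at one pixel; each parameter tuple is
    (amplitude, onset time, width or decay constant)."""
    return gaussian(t, *direct) + gaussian(t, *inter) + subsurface(t, *sss)
```

Fitting these parameters to a measured ToF time profile (e.g. by nonlinear least squares) is what separates the three transport components.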


Computers & Graphics | 2013

Special Section on Advanced Displays: Display adaptive 3D content remapping

Belen Masia; Gordon Wetzstein; Carlos Aliaga; Ramesh Raskar; Diego Gutierrez

Glasses-free automultiscopic displays are on the verge of becoming a standard technology in consumer products. These displays are capable of producing the illusion of 3D content without the need of any additional eyewear. However, due to limitations in angular resolution, they can only show a limited depth of field, which translates into blurred-out areas whenever an object extrudes beyond a certain depth. Moreover, the blurring is device-specific, due to the different constraints of each display. We introduce a novel display-adaptive light field retargeting method, to provide high-quality, blur-free viewing experiences of the same content on a variety of display types, ranging from hand-held devices to movie theaters. We pose the problem as an optimization, which aims at modifying the original light field so that the displayed content appears sharp while preserving the original perception of depth. In particular, we run the optimization on the central view and use warping to synthesize the rest of the light field. We validate our method using existing objective metrics for both image quality (blur) and perceived depth. The proposed framework can also be applied to retargeting disparities in stereoscopic image displays, supporting both dichotomous and non-dichotomous comfort zones.
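The core constraint the abstract describes is a display-specific depth budget: content beyond it blurs. A crude way to picture "retargeting" disparities into that budget is a smooth soft limiter; the sketch below is only an illustrative stand-in for the paper's optimization-based light field retargeting:

```python
import numpy as np

def remap_disparity(d, d_max):
    """Compress scene disparities smoothly into a display's comfortable
    range [-d_max, d_max] with a tanh soft limiter: near-zero disparities
    pass through almost unchanged, large ones saturate at the budget."""
    return d_max * np.tanh(np.asarray(d, dtype=float) / d_max)
```

Unlike this pointwise remap, the paper's method optimizes the central view for sharpness while preserving perceived depth, then warps the remaining light field views.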


International Conference on Computer Graphics and Interactive Techniques | 2013

A metric of visual comfort for stereoscopic motion

Song-Pei Du; Belen Masia; Shi-Min Hu; Diego Gutierrez

We propose a novel metric of visual comfort for stereoscopic motion, based on a series of systematic perceptual experiments. We take into account disparity, motion in depth, motion on the screen plane, and the spatial frequency of luminance contrast. We further derive a comfort metric to predict the comfort of short stereoscopic videos. We validate it on both controlled scenes and real videos available on the internet, and show how all the factors we take into account, as well as their interactions, affect viewing comfort. Last, we propose various applications that can benefit from our comfort measurements and metric.


International Conference on Computer Graphics and Interactive Techniques | 2014

How do people edit light fields?

Adrian Jarabo; Belen Masia; Adrien Bousseau; Diego Gutierrez

We present a thorough study to evaluate different light field editing interfaces, tools and workflows from a user perspective. This is of special relevance given the multidimensional nature of light fields, which may make common image editing tasks become complex in light field space. We additionally investigate the potential benefits of using depth information when editing, and the limitations imposed by imperfect depth reconstruction using current techniques. We perform two different experiments, collecting both objective and subjective data from a varied number of editing tasks of increasing complexity based on local point-and-click tools. In the first experiment, we rely on perfect depth from synthetic light fields, and focus on simple edits. This allows us to gain basic insight on light field editing, and to design a more advanced editing interface. This is then used in the second experiment, employing real light fields with imperfect reconstructed depth, and covering more advanced editing tasks. Our study shows that users can edit light fields with our tested interface and tools, even in the presence of imperfect depth. They follow different workflows depending on the task at hand, mostly relying on a combination of different depth cues. Last, we confirm our findings by asking a set of artists to freely edit both real and synthetic light fields.


Computer Graphics Forum | 2016

Convolutional sparse coding for high dynamic range imaging

Ana Serrano; Felix Heide; Diego Gutierrez; Gordon Wetzstein; Belen Masia

Current HDR acquisition techniques are based on either (i) fusing multibracketed, low dynamic range (LDR) images, (ii) modifying existing hardware and capturing different exposures simultaneously with multiple sensors, or (iii) reconstructing a single image with spatially-varying pixel exposures. In this paper, we propose a novel algorithm to recover high-quality HDR images from a single, coded exposure. The proposed reconstruction method builds on recently-introduced ideas of convolutional sparse coding (CSC); this paper demonstrates how to make CSC practical for HDR imaging. We demonstrate that the proposed algorithm achieves higher-quality reconstructions than alternative methods; we evaluate optical coding schemes, analyze algorithmic parameters, and build a prototype coded HDR camera that demonstrates the utility of convolutional sparse HDRI coding with a custom hardware platform.
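The acquisition model behind "a single, coded exposure" can be simulated in a few lines: a per-pixel exposure mask scales the HDR radiance before the sensor saturates. This sketch shows only that forward model (with assumed names and a normalized saturation level of 1.0); inverting it is the convolutional-sparse-coding reconstruction the paper contributes:

```python
import numpy as np

def coded_exposure_capture(hdr, mask, noise_std=0.0, seed=0):
    """Simulate one coded-exposure capture: a per-pixel exposure mask in
    (0, 1] scales the HDR radiance, optional sensor noise is added, and
    the sensor clips at 1.0. Quantization is omitted for brevity."""
    sensed = hdr * mask
    if noise_std > 0.0:
        rng = np.random.default_rng(seed)
        sensed = sensed + rng.normal(0.0, noise_std, hdr.shape)
    return np.clip(sensed, 0.0, 1.0)
```

Bright regions survive only under low-exposure mask pixels, which is why a well-chosen optical coding scheme matters for the reconstruction.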


IEEE Journal of Selected Topics in Signal Processing | 2017

Light Field Image Processing: An Overview

Gaochang Wu; Belen Masia; Adrian Jarabo; Yuchen Zhang; Liangyong Wang; Qionghai Dai; Tianyou Chai; Yebin Liu

Light field imaging has emerged as a technology that allows capturing richer visual information from our world. As opposed to traditional photography, which captures a 2D projection of the light in the scene integrating the angular domain, light fields collect radiance from rays in all directions, demultiplexing the angular information lost in conventional photography. On the one hand, this higher dimensional representation of visual data offers powerful capabilities for scene understanding, and substantially improves the performance of traditional computer vision problems such as depth sensing, post-capture refocusing, segmentation, video stabilization, material classification, etc. On the other hand, the high-dimensionality of light fields also brings up new challenges in terms of data capture, data compression, content editing, and display. Taking these two elements together, research in light field image processing has become increasingly popular in the computer vision, computer graphics, and signal processing communities. In this paper, we present a comprehensive overview and discussion of research in this field over the past 20 years. We focus on all aspects of light field image processing, including basic light field representation and theory, acquisition, super-resolution, depth estimation, compression, editing, processing algorithms for light field display, and computer vision applications of light field data.
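Post-capture refocusing, one of the applications the overview covers, has a classic shift-and-add formulation over a 4D light field L[u, v, y, x]: shift each sub-aperture view in proportion to its angular offset, then average. A minimal sketch (integer-pixel shifts via np.roll keep it short; real pipelines interpolate sub-pixel shifts):

```python
import numpy as np

def refocus(lightfield, alpha):
    """Shift-and-add refocusing of a 4D light field L[u, v, y, x]: each
    sub-aperture image is shifted proportionally to its angular offset
    from the central view, then all views are averaged. alpha selects
    the refocus plane; alpha = 0 reproduces the captured focus."""
    U, V, H, W = lightfield.shape
    out = np.zeros((H, W))
    for u in range(U):
        for v in range(V):
            dy = int(round(alpha * (u - U // 2)))
            dx = int(round(alpha * (v - V // 2)))
            out += np.roll(lightfield[u, v], (dy, dx), axis=(0, 1))
    return out / (U * V)
```

Scene points at the selected depth align across the shifted views and stay sharp, while points at other depths are averaged over misaligned positions and blur out.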


Computer Graphics Forum | 2012

Perceptually Optimized Coded Apertures for Defocus Deblurring

Belen Masia; Lara Presa; Adrian Corrales; Diego Gutierrez

The field of computational photography, and in particular the design and implementation of coded apertures, has yielded impressive results in recent years. In this paper we introduce perceptually optimized coded apertures for defocus deblurring. We obtain near-optimal apertures by means of optimization, with a novel evaluation function that includes two existing image quality perceptual metrics. These metrics favour results where errors in the final deblurred images will not be perceived by a human observer. Our work improves the results obtained with a similar approach that only takes into account the L2 metric in the evaluation function.
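The L2-only baseline the paper improves upon can be sketched concretely: blur an image with the aperture's PSF, deblur (here with Wiener deconvolution and circular convolution for brevity), and score the aperture by the mean squared reconstruction error. This is an illustrative sketch of that criterion, not the paper's evaluation function, which adds perceptual metrics on top:

```python
import numpy as np

def wiener_deblur(blurred, psf, nsr=0.01):
    """Frequency-domain Wiener deconvolution with a scalar
    noise-to-signal ratio nsr."""
    H = np.fft.fft2(psf, s=blurred.shape)
    G = np.conj(H) / (np.abs(H) ** 2 + nsr)
    return np.real(np.fft.ifft2(np.fft.fft2(blurred) * G))

def l2_aperture_score(image, psf, nsr=0.01):
    """Blur with the aperture's PSF (circular convolution), deblur, and
    return the mean squared error of the recovery (lower is better)."""
    H = np.fft.fft2(psf, s=image.shape)
    blurred = np.real(np.fft.ifft2(np.fft.fft2(image) * H))
    return float(np.mean((wiener_deblur(blurred, psf, nsr) - image) ** 2))
```

Apertures whose frequency response has deep nulls score poorly here because those frequencies cannot be recovered; the perceptual metrics additionally weight errors by how visible they are to a human observer.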

Collaboration

Dive into Belen Masia's collaborations. Top co-authors:

Ana Serrano (University of Zaragoza)
Ramesh Raskar (Massachusetts Institute of Technology)
Andreas Velten (University of Wisconsin-Madison)
Lara Presa (University of Zaragoza)
Christopher Barsi (Massachusetts Institute of Technology)