
Publication


Featured research published by Adrian Jarabo.


International Conference on Computer Graphics and Interactive Techniques | 2014

A framework for transient rendering

Adrian Jarabo; Julio Marco; Adolfo Muñoz; Raul Buisan; Wojciech Jarosz; Diego Gutierrez

Recent advances in ultra-fast imaging have triggered many promising applications in graphics and vision, such as capturing transparent objects, estimating hidden geometry and materials, or visualizing light in motion. There is, however, very little work regarding the effective simulation and analysis of transient light transport, where the speed of light can no longer be considered infinite. We first introduce the transient path integral framework, formally describing light transport in transient state. We then analyze the difficulties arising when considering the light's time-of-flight in the simulation (rendering) of images and videos. We propose a novel density estimation technique that allows reusing sampled paths to reconstruct time-resolved radiance, and devise new sampling strategies that take into account the distribution of radiance along time in participating media. We then efficiently simulate time-resolved phenomena (such as caustic propagation, fluorescence or temporal chromatic dispersion), which can help design future ultra-fast imaging devices using an analysis-by-synthesis approach, as well as achieve a better understanding of the nature of light transport.
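The central quantity in transient rendering is the time of flight of each sampled light path: the sum of its segment lengths, weighted by the refractive index of the medium along each segment, divided by the speed of light. The sketch below bins path contributions into a time-resolved radiance histogram; the function names and the plain histogram binning are illustrative assumptions, not the paper's density estimation technique, which reuses sampled paths more cleverly.

```python
import numpy as np

C = 299_792_458.0  # speed of light in vacuum (m/s)

def path_time_of_flight(vertices, etas=None):
    """Propagation delay of a light path given its vertices (N x 3 array).

    etas optionally gives the refractive index along each of the N-1
    segments; in a vacuum every segment has eta = 1.
    """
    segments = np.diff(vertices, axis=0)
    lengths = np.linalg.norm(segments, axis=1)
    if etas is None:
        etas = np.ones(len(lengths))
    # Light covers each segment at speed c / eta.
    return np.sum(lengths * etas / C)

def bin_transient_radiance(paths, radiances, t_max, n_bins):
    """Accumulate path contributions into a time-resolved radiance curve."""
    histogram = np.zeros(n_bins)
    for vertices, L in zip(paths, radiances):
        t = path_time_of_flight(vertices)
        if t < t_max:
            histogram[int(t / t_max * n_bins)] += L
    return histogram
```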


International Conference on Computer Graphics and Interactive Techniques | 2014

How do people edit light fields?

Adrian Jarabo; Belen Masia; Adrien Bousseau; Diego Gutierrez

We present a thorough study to evaluate different light field editing interfaces, tools and workflows from a user perspective. This is of special relevance given the multidimensional nature of light fields, which can make common image editing tasks complex in light field space. We additionally investigate the potential benefits of using depth information when editing, and the limitations imposed by imperfect depth reconstruction using current techniques. We perform two different experiments, collecting both objective and subjective data from a variety of editing tasks of increasing complexity based on local point-and-click tools. In the first experiment, we rely on perfect depth from synthetic light fields, and focus on simple edits. This allows us to gain basic insight into light field editing, and to design a more advanced editing interface. This is then used in the second experiment, employing real light fields with imperfect reconstructed depth, and covering more advanced editing tasks. Our study shows that users can edit light fields with our tested interface and tools, even in the presence of imperfect depth. They follow different workflows depending on the task at hand, mostly relying on a combination of different depth cues. Lastly, we confirm our findings by asking a set of artists to freely edit both real and synthetic light fields.


IEEE Journal of Selected Topics in Signal Processing | 2017

Light Field Image Processing: An Overview

Gaochang Wu; Belen Masia; Adrian Jarabo; Yuchen Zhang; Liangyong Wang; Qionghai Dai; Tianyou Chai; Yebin Liu

Light field imaging has emerged as a technology that captures richer visual information from our world. As opposed to traditional photography, which captures a 2D projection of the light in the scene by integrating over the angular domain, light fields collect radiance from rays in all directions, demultiplexing the angular information lost in conventional photography. On the one hand, this higher-dimensional representation of visual data offers powerful capabilities for scene understanding, and substantially improves the performance of traditional computer vision problems such as depth sensing, post-capture refocusing, segmentation, video stabilization, and material classification. On the other hand, the high dimensionality of light fields also brings up new challenges in terms of data capture, data compression, content editing, and display. Taking these two elements together, research in light field image processing has become increasingly popular in the computer vision, computer graphics, and signal processing communities. In this paper, we present a comprehensive overview and discussion of research in this field over the past 20 years. We focus on all aspects of light field image processing, including basic light field representation and theory, acquisition, super-resolution, depth estimation, compression, editing, processing algorithms for light field display, and computer vision applications of light field data.
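The post-capture refocusing mentioned above illustrates why the extra angular dimensions are useful: each angular sample is a slightly shifted view of the scene, so shifting the views toward a chosen focal plane and averaging them synthesizes a new focus. Below is a minimal numpy sketch of this shift-and-add refocusing; the (U, V, H, W) sub-aperture layout, the integer pixel shifts, and the wrap-around at image borders are simplifying assumptions.

```python
import numpy as np

def refocus(light_field, shift):
    """Synthetic refocusing by shift-and-add.

    light_field: array of shape (U, V, H, W), one sub-aperture image
    per angular sample (u, v).
    shift: per-view pixel shift controlling the synthetic focal plane;
    shift = 0 reproduces the original focus.
    """
    U, V, H, W = light_field.shape
    cu, cv = (U - 1) / 2, (V - 1) / 2
    out = np.zeros((H, W))
    for u in range(U):
        for v in range(V):
            # Translate each view toward the focal plane, then average.
            # np.roll wraps at the borders, which a real implementation
            # would handle with padding or cropping.
            du = int(round((u - cu) * shift))
            dv = int(round((v - cv) * shift))
            out += np.roll(light_field[u, v], (du, dv), axis=(0, 1))
    return out / (U * V)
```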


International Conference on Computer Graphics and Interactive Techniques | 2015

To stylize or not to stylize?: the effect of shape and material stylization on the perception of computer-generated faces

Eduard Zell; Carlos Aliaga; Adrian Jarabo; Katja Zibrek; Diego Gutierrez; Rachel McDonnell; Mario Botsch

Virtual characters contribute strongly to the overall visuals of 3D animated films. However, designing believable characters remains a challenging task. Artists rely on stylization to increase appeal or expressivity, exaggerating or softening specific features. In this paper we analyze two of the most influential factors that define how a character looks: shape and material. With the help of artists, we design a set of carefully crafted stimuli consisting of different stylization levels for both parameters, and analyze how different combinations affect the perceived realism, appeal, eeriness, and familiarity of the characters. We additionally investigate how this affects the perceived intensity of different facial expressions (sadness, anger, happiness, and surprise). Our experiments reveal that shape is the dominant factor when rating realism and expression intensity, while material is the key component for appeal. Furthermore, our results show that realism alone is a bad predictor of appeal, eeriness, or attractiveness.


Computer Graphics Forum | 2015

A Biophysically-Based Model of the Optical Properties of Skin Aging

Jose A. Iglesias-Guitian; Carlos Aliaga; Adrian Jarabo; Diego Gutierrez

This paper presents a time-varying, multi-layered, biophysically-based model of the optical properties of human skin, suitable for simulating appearance changes due to aging. We have identified the key aspects that cause such changes, both in terms of the structure of skin and its chromophore concentrations, and rely on the extensive medical and optical tissue literature for accurate data. Our model can be expressed in terms of biophysical parameters, optical parameters commonly used in graphics and rendering (such as spectral absorption and scattering coefficients), or, more intuitively, higher-level parameters such as age, gender, skin care or skin type. It can be used with any rendering algorithm that uses diffusion profiles, and it can automatically simulate different types of skin at different stages of aging, avoiding the need for artistic input or costly capture processes. While the presented skin model is inspired by tissue-optics studies, we also provide a simplified version valid for non-diagnostic applications.
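The mapping from biophysical to optical parameters typically takes the form of linear chromophore mixing: the spectral absorption of a skin layer is the concentration-weighted sum of the absorption spectra of its chromophores (melanin, oxygenated and deoxygenated hemoglobin, plus a baseline). The sketch below shows only that standard form; the parameter names are hypothetical, the extinction spectra must come from tissue-optics tables, and the paper's layered model involves more parameters and scattering terms.

```python
def skin_layer_absorption(melanin_fraction, blood_fraction, oxygenation,
                          eps_melanin, eps_hb, eps_hbo2, mu_a_baseline=0.0):
    """Spectral absorption coefficient of a skin layer (hypothetical API).

    The eps_* arguments are wavelength-dependent absorption spectra
    (same units as the returned coefficient) taken from tissue-optics
    literature; only the linear mixing itself is shown here.
    """
    # Blood absorption splits between oxy- and deoxy-hemoglobin.
    blood = blood_fraction * (oxygenation * eps_hbo2
                              + (1.0 - oxygenation) * eps_hb)
    return melanin_fraction * eps_melanin + blood + mu_a_baseline
```

Aging then amounts to driving inputs such as melanin_fraction and blood_fraction as functions of age, which is what lets the model re-render the same face at different life stages without new captures.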


Visual Informatics | 2017

Recent advances in transient imaging: A computer graphics and vision perspective

Adrian Jarabo; Belen Masia; Julio Marco; Diego Gutierrez

Transient imaging has recently made a huge impact in the computer graphics and computer vision fields. By capturing, reconstructing, or simulating light transport at extreme temporal resolutions, researchers have proposed novel techniques to show movies of light in motion, see around corners, detect objects in highly-scattering media, or infer material properties from a distance, to name a few. The key idea is to leverage the wealth of information in the temporal domain at picosecond or nanosecond resolution, information usually lost during the capture-time temporal integration. This paper presents recent advances in the field of transient imaging from a graphics and vision perspective, including capture techniques, analysis, applications and simulation.


IEEE Transactions on Visualization and Computer Graphics | 2014

Effects of Approximate Filtering on the Appearance of Bidirectional Texture Functions

Adrian Jarabo; Hongzhi Wu; Julie Dorsey; Holly E. Rushmeier; Diego Gutierrez

The BTF data structure was a breakthrough for appearance modeling in computer graphics. However, more research is needed to make BTFs practical in rendering applications. We present the first systematic study of the effects of approximate filtering on the appearance of BTFs, exploring the spatial, angular and temporal domains over a varied set of stimuli. We perform our initial experiments on simple geometry and lighting, and verify our observations in more complex settings. We consider multi-dimensional filtering versus conventional mipmapping, and find that multi-dimensional filtering produces superior results. We examine the tradeoff between under- and oversampling, and find that different filtering strategies can be applied in each domain while maintaining visual equivalence with respect to a ground truth. For example, we find that preserving contrast is more important in static than in dynamic images, indicating that greater levels of spatial filtering are possible for animations. We find that filtering can be performed more aggressively in the angular domain than in the spatial domain. Additionally, we find that high-level visual descriptors of the BTF are linked to the perceptual performance of pre-filtered approximations. In turn, some of these high-level descriptors correlate with low-level statistics of the BTF. We show six different practical applications of our findings, improving filtering, rendering and compression strategies.
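One way to picture the difference between conventional mipmapping and multi-dimensional filtering: a BTF is (at least) a 4D table over spatial position and angular configuration, and mipmapping averages only the spatial axes, while multi-dimensional filtering also averages angularly. A minimal numpy sketch follows, with the flattened (H, W, A, B) layout and plain box filter as simplifying assumptions.

```python
import numpy as np

def prefilter_btf(btf, s_spatial, s_angular):
    """Box-downsample a BTF jointly in the spatial and angular domains.

    btf: array of shape (H, W, A, B), spatial axes (H, W) and a
    discretized view/light angular axis pair (A, B). Conventional
    mipmapping corresponds to s_angular = 1 (spatial-only filtering);
    the study's finding that the angular domain tolerates aggressive
    filtering suggests s_angular > s_spatial can remain visually
    equivalent to the ground truth.
    """
    H, W, A, B = btf.shape
    # Crop so every axis divides evenly by its downsampling factor.
    btf = btf[:H // s_spatial * s_spatial, :W // s_spatial * s_spatial,
              :A // s_angular * s_angular, :B // s_angular * s_angular]
    btf = btf.reshape(H // s_spatial, s_spatial, W // s_spatial, s_spatial,
                      A // s_angular, s_angular, B // s_angular, s_angular)
    # Average each (s_spatial x s_spatial x s_angular x s_angular) block.
    return btf.mean(axis=(1, 3, 5, 7))
```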


Computer Graphics Forum | 2012

Crowd Light: Evaluating the Perceived Fidelity of Illuminated Dynamic Scenes

Adrian Jarabo; Tom Van Eyck; Veronica Sundstedt; Kavita Bala; Diego Gutierrez; Carol O'Sullivan

Rendering realistic illumination effects for complex animated scenes with many dynamic objects or characters is computationally expensive. Yet, it is not obvious how important such accurate lighting is for the overall perceived realism in these scenes. In this paper, we present a methodology to evaluate the perceived fidelity of illumination in scenes with dynamic aggregates, such as crowds, and explore several factors which may affect this perception. We focus in particular on evaluating how a popular spherical harmonics lighting method can be used to approximate realistic lighting of crowds. We conduct a series of psychophysical experiments to explore how a simple approach to approximating global illumination, using interpolation in the temporal domain, affects the perceived fidelity of dynamic scenes with high geometric, motion, and illumination complexity. We show that the complexity of the geometry and temporal properties of the crowd entities, the motion of the aggregate as a whole, the type of interpolation (i.e., of the direct and/or indirect illumination coefficients), and the presence or absence of colour all affect perceived fidelity. We show that high (i.e., above 75%) levels of perceived scene fidelity can be maintained while interpolating indirect illumination for intervals of up to 30 frames, resulting in a greater than three-fold rendering speed-up.
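The temporal-interpolation approximation studied here amounts to computing the spherical-harmonics lighting coefficients only at keyframes and blending between them for the frames in between, instead of re-solving the illumination every frame. A minimal sketch of that idea, with linear blending and the per-vertex coefficient layout as assumptions rather than the paper's exact scheme:

```python
import numpy as np

def interpolate_sh_frames(sh_key_a, sh_key_b, n_frames):
    """Yield SH lighting coefficients for the frames between two keyframes.

    sh_key_a, sh_key_b: arrays of shape (n_vertices, n_coeffs), e.g.
    n_coeffs = 9 for 3-band spherical harmonics, computed n_frames apart.
    """
    for i in range(n_frames):
        t = i / n_frames
        # Indirect illumination varies smoothly over time, so blending
        # the coefficient vectors is often perceptually acceptable for
        # intervals of tens of frames (the study reports up to 30).
        yield (1.0 - t) * sh_key_a + t * sh_key_b
```

The speed-up comes directly from amortization: one expensive global-illumination solve serves an entire interval of frames, at the cost of a cheap per-frame blend.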


International Conference on Computer Graphics and Interactive Techniques | 2012

Relativistic ultrafast rendering using time-of-flight imaging

Andreas Velten; Di Wu; Adrian Jarabo; Belen Masia; Christopher Barsi; Everett Lawson; Chinmaya Joshi; Diego Gutierrez; Moungi G. Bawendi; Ramesh Raskar

We capture ultrafast movies of light in motion and synthesize physically valid visualizations. The effective exposure time for each frame is under two picoseconds (ps). Capturing a 2D video with this time resolution is highly challenging, given the low signal-to-noise ratio (SNR) associated with ultrafast exposures, as well as the absence of 2D cameras that operate at this time scale. We re-purpose modern imaging hardware to record an average of ultrafast repeatable events that are synchronized to a streak tube, and we introduce reconstruction methods to visualize propagation of light pulses through macroscopic scenes.


Communications of the ACM | 2016

Imaging the propagation of light through scenes at picosecond resolution

Andreas Velten; Di Wu; Belen Masia; Adrian Jarabo; Christopher Barsi; Chinmaya Joshi; Everett Lawson; Moungi G. Bawendi; Diego Gutierrez; Ramesh Raskar

We present a novel imaging technique, which we call femto-photography, to capture and visualize the propagation of light through table-top scenes with an effective exposure time of 1.85 ps per frame. This is equivalent to a resolution of about one half trillion frames per second; between frames, light travels approximately just 0.5 mm. Since cameras with such extreme shutter speeds obviously do not exist, we first re-purpose modern imaging hardware to record an ensemble average of repeatable events that are synchronized to a streak sensor, in which the time of arrival of light from the scene is coded in one of the sensor's spatial dimensions. We then introduce reconstruction methods that allow us to visualize the propagation of femtosecond light pulses through the scenes. Given this fine temporal resolution and the finite speed of light, we observe that the camera does not necessarily capture events in the same order as they occur in reality: we thus introduce the notion of time-unwarping between the camera's and the world's space-time coordinate systems to take this into account. We apply our femto-photography technique to visualizations of very different scenes, which allow us to observe the rich dynamics of time-resolved light transport effects, including scattering, specular reflections, diffuse interreflections, diffraction, caustics, and subsurface scattering. Our work has potential applications in artistic, educational, and scientific visualizations; industrial imaging to analyze material properties; and medical imaging to reconstruct subsurface elements. In addition, our time-resolved technique has already motivated new forms of computational photography, as well as novel algorithms for the analysis and synthesis of light transport.
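Time-unwarping follows directly from the finite speed of light: light from a point farther away takes longer to reach the sensor, so two events recorded in the same frame need not have happened at the same world time. A minimal sketch of the correction, assuming a per-pixel depth in millimetres and timestamps in picoseconds; the paper's actual correction depends on the full capture geometry:

```python
C_MM_PER_PS = 0.299792458  # light travels roughly 0.3 mm per picosecond

def time_unwarp(t_camera_ps, depth_mm):
    """Map a recorded camera-time event back to world time.

    A photon recorded at camera time t left the visible scene point
    depth_mm / c earlier, so subtracting the camera-to-point travel
    time aligns events at different depths on a common world clock.
    """
    return t_camera_ps - depth_mm / C_MM_PER_PS
```

At 1.85 ps per frame, a depth difference of just half a millimetre already shifts an event by a full frame, which is why the recorded ordering can disagree with the true ordering.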

Collaboration


Dive into Adrian Jarabo's collaborations.

Top Co-Authors

Belen Masia (University of Zaragoza)
Julio Marco (University of Zaragoza)
Andreas Velten (University of Wisconsin-Madison)
Ramesh Raskar (Massachusetts Institute of Technology)
Christopher Barsi (Massachusetts Institute of Technology)