
Publications


Featured research published by Raanan Fattal.


International Conference on Computer Graphics and Interactive Techniques | 2002

Gradient domain high dynamic range compression

Raanan Fattal; Dani Lischinski; Michael Werman

We present a new method for rendering high dynamic range images on conventional displays. Our method is conceptually simple, computationally efficient, robust, and easy to use. We manipulate the gradient field of the luminance image by attenuating the magnitudes of large gradients. A new, low dynamic range image is then obtained by solving a Poisson equation on the modified gradient field. Our results demonstrate that the method is capable of drastic dynamic range compression, while preserving fine details and avoiding common artifacts, such as halos, gradient reversals, or loss of local contrast. The method is also able to significantly enhance ordinary images by bringing out detail in dark regions.
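The pipeline this abstract describes (attenuate large log-luminance gradients, then integrate the modified field by solving a Poisson equation) can be sketched in a few lines of NumPy. The power-law attenuation with parameters `alpha` and `beta` and the periodic FFT-based solver are illustrative assumptions, not the paper's exact attenuation function or boundary handling:

```python
import numpy as np

def attenuate(gx, gy, alpha=0.1, beta=0.85):
    # scale each gradient by (|g|/alpha)^(beta-1): magnitudes above alpha
    # shrink, magnitudes below alpha are mildly boosted
    mag = np.sqrt(gx**2 + gy**2) + 1e-8
    s = (mag / alpha) ** (beta - 1.0)
    return gx * s, gy * s

def poisson_solve(div):
    # solve laplacian(u) = div with periodic boundaries via the FFT
    h, w = div.shape
    fy = np.fft.fftfreq(h)[:, None]
    fx = np.fft.fftfreq(w)[None, :]
    denom = (2 * np.cos(2 * np.pi * fy) - 2) + (2 * np.cos(2 * np.pi * fx) - 2)
    denom[0, 0] = 1.0                       # avoid division by zero at DC
    U = np.fft.fft2(div) / denom
    U[0, 0] = 0.0                           # fix the free additive constant
    return np.real(np.fft.ifft2(U))

def compress(log_lum, alpha=0.1, beta=0.85):
    # forward differences of the log-luminance image
    gx = np.roll(log_lum, -1, axis=1) - log_lum
    gy = np.roll(log_lum, -1, axis=0) - log_lum
    gx, gy = attenuate(gx, gy, alpha, beta)
    # divergence of the modified field (backward differences)
    div = (gx - np.roll(gx, 1, axis=1)) + (gy - np.roll(gy, 1, axis=0))
    return poisson_solve(div)
```

On a high-contrast step image this compresses the overall range while leaving small gradients nearly intact, which is the essence of the halo-free behavior claimed above.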


International Conference on Computer Graphics and Interactive Techniques | 2008

Single image dehazing

Raanan Fattal

In this paper we present a new method for estimating the optical transmission in hazy scenes given a single input image. Based on this estimation, the scattered light is eliminated to increase scene visibility and recover haze-free scene contrasts. In this new approach we formulate a refined image formation model that accounts for surface shading in addition to the transmission function. This allows us to resolve ambiguities in the data by searching for a solution in which the resulting shading and transmission functions are locally statistically uncorrelated. A similar principle is used to estimate the color of the haze. Results demonstrate the new method's ability to remove the haze layer as well as to provide a reliable transmission estimate, which can be used for additional applications such as image refocusing and novel view synthesis.
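The underlying haze image-formation model, I = J*t + A*(1 - t), is easy to invert once the transmission t and airlight color A have been estimated (the estimation itself is the paper's actual contribution and is not shown here); a minimal sketch:

```python
import numpy as np

def recover(I, t, A, t_min=0.1):
    # invert the haze formation model  I = J*t + A*(1 - t)
    # for the haze-free radiance J; clamping t avoids amplifying noise
    # in nearly opaque regions
    t = np.clip(t, t_min, 1.0)[..., None]   # broadcast over RGB channels
    return (I - A) / t + A
```

Given a perfect transmission map, the inversion recovers the scene radiance exactly; in practice the `t_min` clamp trades residual haze for noise suppression.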


International Conference on Computer Graphics and Interactive Techniques | 2008

Edge-preserving decompositions for multi-scale tone and detail manipulation

Zeev Farbman; Raanan Fattal; Dani Lischinski; Richard Szeliski

Many recent computational photography techniques decompose an image into a piecewise smooth base layer, containing large scale variations in intensity, and a residual detail layer capturing the smaller scale details in the image. In many of these applications, it is important to control the spatial scale of the extracted details, and it is often desirable to manipulate details at multiple scales, while avoiding visual artifacts. In this paper we introduce a new way to construct edge-preserving multi-scale image decompositions. We show that current base-detail decomposition techniques, based on the bilateral filter, are limited in their ability to extract detail at arbitrary scales. Instead, we advocate the use of an alternative edge-preserving smoothing operator, based on the weighted least squares optimization framework, which is particularly well suited for progressive coarsening of images and for multi-scale detail extraction. After describing this operator, we show how to use it to construct edge-preserving multi-scale decompositions, and compare it to the bilateral filter, as well as to other schemes. Finally, we demonstrate the effectiveness of our edge-preserving decompositions in the context of LDR and HDR tone mapping, detail enhancement, and other applications.
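A compact weighted-least-squares smoother along the lines the abstract describes: solve (I + lam*L)u = g, where L is a data-dependent Laplacian whose per-edge weights shrink across strong log-luminance gradients. The parameter names and the exact weight formula follow common WLS write-ups and are assumptions, not the paper's verbatim definitions:

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def wls_smooth(g, lam=1.0, alpha=1.2, eps=1e-4):
    # minimize |u - g|^2 + lam * sum_edges w_e (u_i - u_j)^2, where the
    # weight w_e is small across strong log-luminance gradients (edges)
    h, w = g.shape
    n = h * w
    log_g = np.log(g + eps)
    wy = lam / (np.abs(np.diff(log_g, axis=0)) ** alpha + eps)  # (h-1, w)
    wx = lam / (np.abs(np.diff(log_g, axis=1)) ** alpha + eps)  # (h, w-1)

    idx = np.arange(n).reshape(h, w)
    rows = np.concatenate([idx[:-1, :].ravel(), idx[1:, :].ravel(),
                           idx[:, :-1].ravel(), idx[:, 1:].ravel()])
    cols = np.concatenate([idx[1:, :].ravel(), idx[:-1, :].ravel(),
                           idx[:, 1:].ravel(), idx[:, :-1].ravel()])
    vals = -np.concatenate([wy.ravel(), wy.ravel(), wx.ravel(), wx.ravel()])
    A = sp.coo_matrix((vals, (rows, cols)), shape=(n, n)).tocsr()
    # diagonal: identity plus the (positive) sum of the edge weights
    A = A + sp.diags(1.0 - np.asarray(A.sum(axis=1)).ravel())
    return spla.spsolve(A, g.ravel()).reshape(h, w)
```

Increasing `lam` yields progressively coarser base layers, which is what makes this operator suitable for the multi-scale detail extraction discussed above.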


ACM Transactions on Graphics | 2011

Image and video upscaling from local self-examples

Gilad Freedman; Raanan Fattal

We propose a new high-quality and efficient single-image upscaling technique that extends existing example-based super-resolution frameworks. In our approach we do not rely on an external example database or use the whole input image as a source for example patches. Instead, we follow a local self-similarity assumption on natural images and extract patches from extremely localized regions in the input image. This allows us to reduce considerably the nearest-patch search time without compromising quality in most images. Tests that we perform and report show that the local self-similarity assumption holds better for small scaling factors, where there are more example patches of greater relevance. We implement these small scalings using dedicated novel non-dyadic filter banks that we derive based on principles that model the upscaling process. Moreover, the new filters are nearly biorthogonal and hence produce high-resolution images that are highly consistent with the input image without solving implicit back-projection equations. The local and explicit nature of our algorithm makes it simple and efficient, and allows a trivial parallel implementation on a GPU. We demonstrate the new method's ability to produce high-quality resolution enhancement, its application to video sequences with no algorithmic modification, and its efficiency in enhancing low-resolution video standards into recent high-definition formats in real time.
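The key idea of searching for example patches only in a small window around the query location, rather than in a global database, can be illustrated directly. This toy SSD search stands in for the paper's full pipeline and omits its non-dyadic filter banks:

```python
import numpy as np

def local_patch_match(src, query, cy, cx, r=4):
    # exhaustive SSD search restricted to a (2r+1) x (2r+1) window around
    # (cy, cx): the localized search that replaces a global example database
    p = query.shape[0]
    best, best_pos = np.inf, (cy, cx)
    for y in range(max(0, cy - r), min(src.shape[0] - p, cy + r) + 1):
        for x in range(max(0, cx - r), min(src.shape[1] - p, cx + r) + 1):
            d = np.sum((src[y:y + p, x:x + p] - query) ** 2)
            if d < best:
                best, best_pos = d, (y, x)
    return best_pos
```

The window of radius `r` replaces an O(image-size) nearest-neighbor search with an O(r^2) one per patch, which is where the claimed speedup comes from under the local self-similarity assumption.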


International Conference on Computer Graphics and Interactive Techniques | 2007

Image upsampling via imposed edge statistics

Raanan Fattal

In this paper we propose a new method for upsampling images which is capable of generating sharp edges with reduced input-resolution grid-related artifacts. The method is based on a statistical edge dependency relating certain edge features of two different resolutions, which is generically exhibited by real-world images. While other solutions assume some form of smoothness, we rely on this distinctive edge dependency as our prior knowledge in order to increase image resolution. In addition to this relation we require that intensities are conserved; the output image must be identical to the input image when downsampled to the original resolution. Altogether the method consists of solving a constrained optimization problem, attempting to impose the correct edge relation and conserve local intensities with respect to the low-resolution input image. Results demonstrate the visual importance of having such edge features properly matched, and the method's capability to produce images in which sharp edges are successfully reconstructed.
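The intensity-conservation constraint mentioned above (the output must reproduce the input when downsampled) can be enforced exactly for a box downsampler by adding back the per-block residual; a sketch under that assumed downsampling kernel:

```python
import numpy as np

def enforce_conservation(up, low, s=2):
    # project the upsampled estimate so that box-downsampling it by a
    # factor of s reproduces the low-resolution input exactly
    h, w = low.shape
    means = up.reshape(h, s, w, s).mean(axis=(1, 3))
    return up + np.kron(low - means, np.ones((s, s)))
```

In an iterative solver this projection would alternate with steps imposing the edge-statistics prior; the constraint itself is a single exact correction per block.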


International Conference on Computer Graphics and Interactive Techniques | 2007

Multiscale shape and detail enhancement from multi-light image collections

Raanan Fattal; Maneesh Agrawala; Szymon Rusinkiewicz

We present a new image-based technique for enhancing the shape and surface details of an object. The input to our system is a small set of photographs taken from a fixed viewpoint, but under varying lighting conditions. For each image we compute a multiscale decomposition based on the bilateral filter and then reconstruct an enhanced image that combines detail information at each scale across all the input images. Our approach does not require any information about light source positions or camera calibration, and can produce good results with 3 to 5 input images. In addition our system provides a few high-level parameters for controlling the amount of enhancement and does not require pixel-level user input. We show that the bilateral filter is a good choice for our multiscale algorithm because it avoids the halo artifacts commonly associated with the traditional Laplacian image pyramid. We also develop a new scheme for computing our multiscale bilateral decomposition that is simple to implement, fast (O(N² log N)), and accurate.
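A brute-force version of the multiscale bilateral decomposition the abstract describes: repeatedly bilateral-filter the image with a growing spatial sigma and keep the differences between successive levels as detail layers. The sigma schedule and the wrap-around boundary handling are simplifying assumptions, and this naive loop is far slower than the paper's scheme:

```python
import numpy as np

def bilateral(img, sigma_s=2.0, sigma_r=0.1, radius=4):
    # brute-force bilateral filter: spatial Gaussian times range Gaussian
    out = np.zeros_like(img)
    norm = np.zeros_like(img)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            shifted = np.roll(np.roll(img, dy, axis=0), dx, axis=1)
            wgt = np.exp(-(dy * dy + dx * dx) / (2 * sigma_s ** 2)
                         - (shifted - img) ** 2 / (2 * sigma_r ** 2))
            out += wgt * shifted
            norm += wgt
    return out / norm

def multiscale_bilateral(img, levels=3):
    # progressively coarser bases (doubling sigma_s per level);
    # the detail layers are differences of successive bases
    bases = [img]
    for i in range(levels):
        bases.append(bilateral(bases[-1], sigma_s=2.0 * 2 ** i))
    details = [bases[i] - bases[i + 1] for i in range(levels)]
    return bases[-1], details
```

Because the detail layers telescope, summing the coarsest base with all detail layers reconstructs the input exactly; enhancement then amounts to scaling each detail layer before summing.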


International Conference on Computer Graphics and Interactive Techniques | 2009

Edge-avoiding wavelets and their applications

Raanan Fattal

We propose a new family of second-generation wavelets constructed using a robust data-prediction lifting scheme. The support of these new wavelets is constructed based on the edge content of the image and avoids having pixels from both sides of an edge. Multi-resolution analysis, based on these new edge-avoiding wavelets, shows a better decorrelation of the data compared to common linear translation-invariant multi-resolution analyses. The reduced inter-scale correlation allows us to avoid halo artifacts in band-independent multi-scale processing without taking any special precautions. We thus achieve nonlinear data-dependent multi-scale edge-preserving image filtering and processing at computation times which are linear in the number of image pixels. The new wavelets encode, in their shape, the smoothness information of the image at every scale. We use this to derive a new edge-aware interpolation scheme that achieves results, previously computed by solving an inhomogeneous Laplace equation, through an explicit computation. We thus avoid the difficulties in solving large and poorly-conditioned systems of equations. We demonstrate the effectiveness of the new wavelet basis for various computational photography applications such as multi-scale dynamic-range compression, edge-preserving smoothing and detail enhancement, and image colorization.
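One prediction step of an edge-avoiding lifting scheme can be shown in 1D: each odd sample is predicted from its even neighbors with weights that collapse across large value differences, so detail coefficients stay near zero even at edges. This sketch uses assumed Gaussian weights and periodic ends, not the paper's exact robust predictor:

```python
import numpy as np

def eaw_predict_1d(x, sigma=0.1):
    # one lifting prediction step on an even-length signal: weights fall off
    # across edges, so the predictor avoids mixing the two sides of an edge
    even, odd = x[0::2].astype(float), x[1::2].astype(float)
    left, right = even, np.roll(even, -1)       # neighbors (periodic ends)
    wl = np.exp(-((odd - left) / sigma) ** 2)
    wr = np.exp(-((odd - right) / sigma) ** 2)
    pred = (wl * left + wr * right) / (wl + wr + 1e-12)
    return even, odd - pred    # approximation samples, detail coefficients
```

On a piecewise-constant signal the detail band is essentially zero, whereas a plain half-and-half average predictor would leave a large coefficient at every edge; this is the reduced inter-scale correlation the abstract refers to.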


International Conference on Computer Graphics and Interactive Techniques | 2007

Efficient simulation of inextensible cloth

Rony Goldenthal; David Harmon; Raanan Fattal; Michel Bercovier; Eitan Grinspun

Many textiles do not noticeably stretch under their own weight. Unfortunately, for better performance many cloth solvers disregard this fact. We propose a method to obtain very low strain along the warp and weft direction using Constrained Lagrangian Mechanics and a novel fast projection method. The resulting algorithm acts as a velocity filter that easily integrates into existing simulation code.
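As a stand-in for the paper's Newton-based fast projection on Constrained Lagrangian Mechanics, a simple position-based Gauss-Seidel projection illustrates the same goal of driving per-edge strain toward zero (this simplified projector is not the paper's method):

```python
import numpy as np

def project_edges(pos, edges, rest, iters=20):
    # iteratively move each edge's endpoints back to the rest length;
    # applied after a time step, this acts as a filter enforcing
    # near-zero strain along the constrained directions
    pos = pos.astype(float).copy()
    for _ in range(iters):
        for (i, j), L in zip(edges, rest):
            d = pos[j] - pos[i]
            ln = np.linalg.norm(d)
            corr = 0.5 * (ln - L) / (ln + 1e-12) * d
            pos[i] += corr
            pos[j] -= corr
    return pos
```

In a solver loop the corrected positions would be differenced against the pre-projection ones to update velocities, which is the sense in which such a projection "acts as a velocity filter".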


International Conference on Computer Graphics and Interactive Techniques | 2004

Target-driven smoke animation

Raanan Fattal; Dani Lischinski

In this paper we present a new method for efficiently controlling animated smoke. Given a sequence of target smoke states, our method generates a smoke simulation in which the smoke is driven towards each of these targets in turn, while exhibiting natural-looking interesting smoke-like behavior. This control is made possible by two new terms that we add to the standard flow equations: (i) a driving force term that causes the fluid to carry the smoke towards a particular target, and (ii) a smoke gathering term that prevents the smoke from diffusing too much. These terms are explicitly defined by the instantaneous state of the system at each simulation timestep. Thus, no expensive optimization is required, allowing complex smoke animations to be generated with very little additional cost compared to ordinary flow simulations.
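The driving-force term (i) can be sketched as pushing the current density along the normalized gradient of a blurred target, using the identity grad(t)/t = grad(log t); the 3-tap blur and the force constant `vf` are illustrative assumptions rather than the paper's exact definitions:

```python
import numpy as np

def box_blur(a, passes=2):
    # cheap separable 3-tap blur standing in for the paper's smoothing
    out = a.astype(float).copy()
    for _ in range(passes):
        for ax in (0, 1):
            out = (np.roll(out, 1, ax) + out + np.roll(out, -1, ax)) / 3.0
    return out

def driving_force(rho, target, vf=1.0, eps=1e-6):
    # force proportional to rho times the normalized gradient of the
    # blurred target density: it carries smoke toward the target
    t = box_blur(target) + eps
    gy, gx = np.gradient(np.log(t))
    return vf * rho * gy, vf * rho * gx
```

Because the force depends only on the instantaneous density and the (fixed) blurred target, it is evaluated directly at each timestep, matching the abstract's claim that no optimization pass is needed.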


International Conference on Computer Graphics and Interactive Techniques | 2010

Diffusion maps for edge-aware image editing

Zeev Farbman; Raanan Fattal; Dani Lischinski

Edge-aware operations, such as edge-preserving smoothing and edge-aware interpolation, require assessing the degree of similarity between pairs of pixels, typically defined as a simple monotonic function of the Euclidean distance between pixel values in some feature space. In this work we introduce the idea of replacing these Euclidean distances with diffusion distances, which better account for the global distribution of pixels in their feature space. These distances are approximated using diffusion maps: a set of the dominant eigenvectors of a large affinity matrix, which may be computed efficiently by sampling a small number of matrix columns (the Nyström method). We demonstrate the benefits of using diffusion distances in a variety of image editing contexts, and explore the use of diffusion maps as a tool for facilitating the creation of complex selection masks. Finally, we present a new analysis that establishes a connection between the spatial interaction range between two pixels, and the number of samples necessary for accurate Nyström approximations.
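A dense diffusion-map sketch (without the Nystrom column sampling the paper uses for efficiency): embed feature vectors via the leading eigenvectors of the row-normalized Gaussian affinity matrix, so that Euclidean distances in the embedding approximate diffusion distances. The parameter choices are illustrative:

```python
import numpy as np

def diffusion_map(X, sigma=1.0, n_ev=2, t=1):
    # X: n x d feature vectors. Build the Gaussian affinity matrix, row-
    # normalize it into a transition matrix, and embed each point with the
    # leading nontrivial eigenvectors weighted by eigenvalue^t.
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-d2 / (2 * sigma ** 2))
    P = K / K.sum(axis=1, keepdims=True)
    vals, vecs = np.linalg.eig(P)       # eigenvalues are real (P is similar
    order = np.argsort(-vals.real)      # to a symmetric matrix)
    vals, vecs = vals.real[order], vecs.real[:, order]
    return vecs[:, 1:n_ev + 1] * vals[1:n_ev + 1] ** t  # drop trivial eigvec
```

Points connected by many short affinity paths end up close in this embedding even if their raw feature distance is moderate, which is why diffusion distances better respect the global pixel distribution.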

Collaboration


Dive into Raanan Fattal's collaborations.

Top Co-Authors

Dani Lischinski
Hebrew University of Jerusalem

Raz Kupferman
Hebrew University of Jerusalem

Zeev Farbman
Hebrew University of Jerusalem

Daniel Lischinski
Hebrew University of Jerusalem

Michael Werman
Hebrew University of Jerusalem

Amit Goldstein
Hebrew University of Jerusalem

Gilad Freedman
Hebrew University of Jerusalem

Inbar Huberman
Hebrew University of Jerusalem