Publications


Featured research published by Anat Levin.


International Conference on Computer Graphics and Interactive Techniques | 2007

Image and depth from a conventional camera with a coded aperture

Anat Levin; Rob Fergus; William T. Freeman

A conventional camera captures blurred versions of scene information away from the plane of focus. Camera systems have been proposed that allow for recording all-focus images, or for extracting depth, but to record both simultaneously has required more extensive hardware and reduced spatial resolution. We propose a simple modification to a conventional camera that allows for the simultaneous recovery of both (a) high resolution image information and (b) depth information adequate for semi-automatic extraction of a layered depth representation of the image. Our modification is to insert a patterned occluder within the aperture of the camera lens, creating a coded aperture. We introduce a criterion for depth discriminability which we use to design the preferred aperture pattern. Using a statistical model of images, we can recover both depth information and an all-focus image from single photographs taken with the modified camera. A layered depth map is then extracted, requiring user-drawn strokes to clarify layer assignments in some cases. The resulting sharp image and layered depth map can be combined for various photographic applications, including automatic scene segmentation, post-exposure refocusing, or re-rendering of the scene from an alternate viewpoint.
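
The abstract above describes recovering depth by deconvolving each image region with the coded kernel scaled for different candidate depths and keeping the depth whose result best matches natural image statistics. The sketch below is a heavily simplified illustration of that selection step only, assuming circular convolution, a plain L2 (Wiener-style) deconvolver, and a generic sparse-gradient score in place of the paper's learned prior and aperture design; the names (wiener_deconvolve, kernels_by_depth) are illustrative, not from the paper.

```python
import numpy as np
from numpy.fft import fft2, ifft2

def wiener_deconvolve(blurred, kernel, reg=1e-2):
    """Frequency-domain deconvolution with a simple L2 regularizer."""
    K = fft2(kernel, s=blurred.shape)
    X = np.conj(K) * fft2(blurred) / (np.abs(K) ** 2 + reg)
    return np.real(ifft2(X))

def gradient_score(img, alpha=0.8):
    """Sparse-gradient penalty: deconvolving with a wrong-scale kernel leaves
    ringing, i.e. many large gradients, and therefore a high score."""
    gx = np.diff(img, axis=1)
    gy = np.diff(img, axis=0)
    return np.sum(np.abs(gx) ** alpha) + np.sum(np.abs(gy) ** alpha)

def estimate_depth(patch, kernels_by_depth):
    """Deconvolve the patch with the coded kernel scaled for each candidate
    depth and keep the depth whose result looks most natural."""
    scores = {d: gradient_score(wiener_deconvolve(patch, k))
              for d, k in kernels_by_depth.items()}
    return min(scores, key=scores.get)
```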


International Conference on Computer Graphics and Interactive Techniques | 2004

Colorization using optimization

Anat Levin; Dani Lischinski; Yair Weiss

Colorization is a computer-assisted process of adding color to a monochrome image or movie. The process typically involves segmenting images into regions and tracking these regions across image sequences. Neither of these tasks can be performed reliably in practice; consequently, colorization requires considerable user intervention and remains a tedious, time-consuming, and expensive task. In this paper we present a simple colorization method that requires neither precise image segmentation nor accurate region tracking. Our method is based on a simple premise: neighboring pixels in space-time that have similar intensities should have similar colors. We formalize this premise using a quadratic cost function and obtain an optimization problem that can be solved efficiently using standard techniques. In our approach an artist only needs to annotate the image with a few color scribbles, and the indicated colors are automatically propagated in both space and time to produce a fully colorized image or sequence. We demonstrate that high quality colorizations of stills and movie clips may be obtained from a relatively modest amount of user input.
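
The quadratic cost described above leads to a sparse linear system: each unconstrained pixel's chrominance equals an intensity-weighted average of its neighbours', while scribbled pixels are held fixed. The sketch below is an illustrative single-frame reimplementation of that idea, not the authors' released code, using simple Gaussian intensity weights on a 4-connected grid.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve

def colorize_channel(gray, scribble, mask, sigma=0.05):
    """gray, scribble: HxW float arrays; mask: True where the user scribbled.
    Each free pixel is constrained to the intensity-weighted average of its
    4 neighbours; scribbled pixels keep their given value."""
    h, w = gray.shape
    idx = np.arange(h * w).reshape(h, w)
    rows, cols, vals = [], [], []
    b = np.zeros(h * w)
    for y in range(h):
        for x in range(w):
            i = idx[y, x]
            rows.append(i); cols.append(i); vals.append(1.0)
            if mask[y, x]:                       # scribbled pixel: fix its value
                b[i] = scribble[y, x]
                continue
            nbrs = [(y + dy, x + dx)
                    for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1))
                    if 0 <= y + dy < h and 0 <= x + dx < w]
            wts = np.array([np.exp(-(gray[y, x] - gray[ny, nx]) ** 2 / (2 * sigma ** 2))
                            for ny, nx in nbrs])
            wts /= wts.sum()
            for (ny, nx), wn in zip(nbrs, wts):
                rows.append(i); cols.append(idx[ny, nx]); vals.append(-wn)
    A = sp.csr_matrix((vals, (rows, cols)), shape=(h * w, h * w))
    return spsolve(A.tocsc(), b).reshape(h, w)
```

Running this once per chrominance channel of a YUV image, while keeping the original luminance, propagates the scribbles across the frame; the paper applies the same cost to space-time neighbourhoods so the scribbles also propagate across video frames.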


Computer Vision and Pattern Recognition | 2009

Understanding and evaluating blind deconvolution algorithms

Anat Levin; Yair Weiss; William T. Freeman

Blind deconvolution is the recovery of a sharp version of a blurred image when the blur kernel is unknown. Recent algorithms have afforded dramatic progress, yet many aspects of the problem remain challenging and hard to understand. The goal of this paper is to analyze and evaluate recent blind deconvolution algorithms both theoretically and experimentally. We explain the previously reported failure of the naive MAP approach by demonstrating that it mostly favors no-blur explanations. On the other hand, we show that since the kernel size is often smaller than the image size, a MAP estimation of the kernel alone can be well constrained and accurately recover the true blur. The plethora of recent deconvolution techniques makes an experimental evaluation on ground-truth data important. We have collected blur data with ground truth and compared recent algorithms under equal settings. Additionally, our data demonstrate that the shift-invariant blur assumption made by most algorithms is often violated.
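
The "favors no-blur explanations" finding has a simple intuition that a few lines can illustrate: under a sparse gradient prior of the form sum |gradient|^alpha with alpha < 1, blurring a single clean edge raises the prior cost, but blurring dense, texture-like detail lowers it, so for natural images the blurry input itself often scores better than the true sharp image. The toy below only illustrates that effect on 1-D signals; it is not the paper's analysis, and the signals and box kernel are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sparse_energy(x, alpha=0.5):
    """Negative log prior up to constants: sum |gradient|^alpha, alpha < 1."""
    return np.sum(np.abs(np.diff(x)) ** alpha)

def box_blur(x, width=5):
    return np.convolve(x, np.ones(width) / width, mode='valid')

step = np.concatenate([np.zeros(50), np.ones(50)])    # a single clean edge
texture = 0.1 * rng.standard_normal(1000)             # dense, noise-like detail

print(sparse_energy(step), sparse_energy(box_blur(step)))        # sharp edge scores better
print(sparse_energy(texture), sparse_energy(box_blur(texture)))  # blurred texture scores better
```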


European Conference on Computer Vision | 2004

Seamless Image Stitching in the Gradient Domain

Anat Levin; Assaf Zomet; Shmuel Peleg; Yair Weiss

Image stitching is used to combine several individual images having some overlap into a composite image. The quality of image stitching is measured by the similarity of the stitched image to each of the input images, and by the visibility of the seam between the stitched images.


Computer Vision and Pattern Recognition | 2011

Efficient marginal likelihood optimization in blind deconvolution

Anat Levin; Yair Weiss; William T. Freeman

In blind deconvolution one aims to estimate from an input blurred image y a sharp image x and an unknown blur kernel k. Recent research shows that a key to success is to consider the overall shape of the posterior distribution p(x, k | y) and not only its mode. This leads to a distinction between MAP_{x,k} strategies, which estimate the mode pair (x, k) and often lead to undesired results, and MAP_k strategies, which select the best k while marginalizing over all possible x images. The MAP_k principle is significantly more robust than the MAP_{x,k} one, yet it involves a challenging marginalization over latent images. As a result, MAP_k techniques are considered complicated and have not been widely exploited. This paper derives a simple approximated MAP_k algorithm which involves only a modest modification of common MAP_{x,k} algorithms. We show that MAP_k can, in fact, be optimized easily, with no additional computational complexity.
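
For intuition about what "marginalizing over all possible x images" buys, the toy below scores candidate kernels by the marginal likelihood p(y | k) in the special case of a Gaussian image prior and circular convolution, where the integral over x has a closed form in the Fourier domain. The paper's actual algorithm handles sparse priors through an approximate EM-style scheme, which this sketch does not reproduce; prior_var and noise_var are assumed, illustrative parameters.

```python
import numpy as np
from numpy.fft import fft2

def neg_log_marginal(blurred, kernel, prior_var=1.0, noise_var=1e-4):
    """-log p(y | k) up to a constant for y = k*x + n with a Gaussian prior on x:
    each Fourier coefficient of y is zero-mean with variance |K|^2 * prior_var + noise_var."""
    K = fft2(kernel, s=blurred.shape)
    Y = fft2(blurred)
    var = np.abs(K) ** 2 * prior_var + noise_var
    return np.sum(np.abs(Y) ** 2 / var + np.log(var)) / blurred.size

def select_kernel(blurred, candidate_kernels):
    """Score every candidate kernel by its marginal likelihood and keep the best."""
    scores = [neg_log_marginal(blurred, k) for k in candidate_kernels]
    return candidate_kernels[int(np.argmin(scores))]
```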


European Conference on Computer Vision | 2006

Learning to combine bottom-up and top-down segmentation

Anat Levin; Yair Weiss

Bottom-up segmentation based only on low-level cues is a notoriously difficult problem. This difficulty has led to recent top-down segmentation algorithms that are based on class-specific image information. Despite the success of top-down algorithms, they often give coarse segmentations that can be significantly refined using low-level cues. This raises the question of how to combine both top-down and bottom-up cues in a principled manner. In this paper we approach this problem using supervised learning. Given a training set of ground truth segmentations, we train a fragment-based segmentation algorithm which takes into account both bottom-up and top-down cues simultaneously, in contrast to most existing algorithms which train top-down and bottom-up modules separately. We formulate the problem in the framework of Conditional Random Fields (CRF) and derive a novel feature induction algorithm for CRF, which allows us to efficiently search over thousands of candidate fragments. Whereas pure top-down algorithms often require hundreds of fragments, our simultaneous learning procedure yields algorithms with a handful of fragments that are combined with low-level cues to efficiently compute high quality segmentations.
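
As a concrete, much simplified picture of what "combining both cues" means, the toy energy below adds a top-down term from class-specific fragment evidence to a bottom-up smoothness term weighted by low-level edge strength. The paper's contribution is learning which fragments to use via CRF feature induction, which is not shown here; fragment_scores and edge_weights are assumed inputs with illustrative names.

```python
import numpy as np

def segmentation_energy(labels, fragment_scores, edge_weights, lam=1.0):
    """labels: HxW array in {0, 1}; fragment_scores: HxW top-down figure/ground
    evidence (positive = figure); edge_weights: {((y, x), (y2, x2)): w} with
    small w across strong low-level edges, so cuts prefer to follow them."""
    top_down = np.sum(np.where(labels == 1, -fragment_scores, fragment_scores))
    bottom_up = sum(w for (p, q), w in edge_weights.items() if labels[p] != labels[q])
    return top_down + lam * bottom_up
```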


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2008

Spectral Matting

Anat Levin; Alex Rav-Acha; Dani Lischinski

We present spectral matting: a new approach to natural image matting that automatically computes a basis set of fuzzy matting components from the smallest eigenvectors of a suitably defined Laplacian matrix. Thus, our approach extends spectral segmentation techniques, whose goal is to extract hard segments, to the extraction of soft matting components. These components may then be used as building blocks to easily construct semantically meaningful foreground mattes, either in an unsupervised fashion, or based on a small amount of user input.
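
The spectral step itself is short: build an affinity Laplacian over the pixels and take its smallest eigenvectors as the raw material for matting components. The sketch below substitutes a simple intensity-based graph Laplacian for the matting Laplacian the paper actually uses, so it only illustrates the eigen-decomposition step, not the paper's component extraction or grouping; the function names and sigma are illustrative.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh

def intensity_laplacian(gray, sigma=0.1):
    """4-connected graph Laplacian with Gaussian intensity affinities
    (a stand-in for the matting Laplacian used in the paper)."""
    h, w = gray.shape
    idx = np.arange(h * w).reshape(h, w)
    rows, cols, vals = [], [], []
    for dy, dx in ((0, 1), (1, 0)):                  # right and down neighbours
        a = gray[:h - dy, :w - dx]
        b = gray[dy:, dx:]
        wgt = np.exp(-((a - b) ** 2) / (2 * sigma ** 2)).ravel()
        i = idx[:h - dy, :w - dx].ravel()
        j = idx[dy:, dx:].ravel()
        rows += [*i, *j]; cols += [*j, *i]; vals += [*wgt, *wgt]
    W = sp.csr_matrix((vals, (rows, cols)), shape=(h * w, h * w))
    D = sp.diags(np.asarray(W.sum(axis=1)).ravel())
    return (D - W).tocsr()

def smallest_eigenvectors(L, k=10):
    """Eigenvectors of the k smallest eigenvalues; in the paper these are the
    raw ingredients from which the fuzzy matting components are built.
    (Shift-invert mode is faster for large images; 'SM' keeps the sketch simple.)"""
    _, vecs = eigsh(L, k=k, which='SM')
    return vecs
```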


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2011

Understanding Blind Deconvolution Algorithms

Anat Levin; Yair Weiss; William T. Freeman

Blind deconvolution is the recovery of a sharp version of a blurred image when the blur kernel is unknown. Recent algorithms have afforded dramatic progress, yet many aspects of the problem remain challenging and hard to understand. The goal of this paper is to analyze and evaluate recent blind deconvolution algorithms both theoretically and experimentally. We explain the previously reported failure of the naive MAP approach by demonstrating that it mostly favors no-blur explanations. We show that, using reasonable image priors, a naive MAP estimation of both latent image and blur kernel is guaranteed to fail even with infinitely large images sampled from the prior. On the other hand, we show that since the kernel size is often smaller than the image size, a MAP estimation of the kernel alone is well constrained and is guaranteed to recover the true blur. The plethora of recent deconvolution techniques makes an experimental evaluation on ground-truth data important. As a first step toward this experimental evaluation, we have collected blur data with ground truth and compared recent algorithms under equal settings. Additionally, our data demonstrate that the shift-invariant blur assumption made by most algorithms is often violated.


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2007

User Assisted Separation of Reflections from a Single Image Using a Sparsity Prior

Anat Levin; Yair Weiss

When we take a picture through transparent glass, the image we obtain is often a linear superposition of two images: The image of the scene beyond the glass plus the image of the scene reflected by the glass. Decomposing the single input image into two images is a massively ill-posed problem: In the absence of additional knowledge about the scene being viewed, there are an infinite number of valid decompositions. In this paper, we focus on an easier problem: user assisted separation in which the user interactively labels a small number of gradients as belonging to one of the layers. Even given labels on part of the gradients, the problem is still ill-posed and additional prior knowledge is needed. Following recent results on the statistics of natural images, we use a sparsity prior over derivative filters. This sparsity prior is optimized using the iterative reweighted least squares (IRLS) approach. Our results show that using a prior derived from the statistics of natural images gives a far superior performance compared to a Gaussian prior and it enables good separations from a modest number of labeled gradients.
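
The optimization machinery mentioned above, IRLS, is generic enough to sketch on its own: to minimize a sum of |residual|^p terms with p < 2, alternate between reweighting each residual by |r|^(p-2) and solving an ordinary weighted least-squares problem. The sketch below shows that loop for an arbitrary dense system; the paper applies it to derivative-filter outputs of the two layers with the user-labeled gradients as constraints, which is not reproduced here, and the toy system at the end is purely illustrative.

```python
import numpy as np

def irls(A, b, p=0.8, iters=30, eps=1e-6):
    """Minimize sum_i |(A x - b)_i|^p via iteratively reweighted least squares."""
    x = np.linalg.lstsq(A, b, rcond=None)[0]         # start from the L2 solution
    for _ in range(iters):
        r = A @ x - b
        w = (np.abs(r) + eps) ** (p - 2)             # small residuals -> large weights
        AtW = A.T * w                                # A.T with column j scaled by w[j]
        x = np.linalg.solve(AtW @ A, AtW @ b)
    return x

# Example: a line fit where the p < 2 penalty down-weights an outlier.
A = np.vstack([np.arange(10.0), np.ones(10)]).T
b = 2 * np.arange(10.0) + 1
b[7] += 20                                           # one corrupted observation
slope, intercept = irls(A, b)
```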


IEEE Transactions on Image Processing | 2006

Seamless image stitching by minimizing false edges

Assaf Zomet; Anat Levin; Shmuel Peleg; Yair Weiss

Various applications such as mosaicing and object insertion require stitching of image parts. The stitching quality is measured visually by the similarity of the stitched image to each of the input images, and by the visibility of the seam between the stitched images. In order to define and obtain the best possible stitching, we introduce several formal cost functions for the evaluation of the stitching quality. In these cost functions, the similarity to the input images and the visibility of the seam are defined in the gradient domain, minimizing the disturbing edges along the seam. A good stitching will optimize these cost functions, overcoming both photometric inconsistencies and geometric misalignments between the stitched images. We study the cost functions and compare their performance for different scenarios, both theoretically and practically. Our approach is demonstrated in various applications including generation of panoramic images, object blending, and removal of compression artifacts. Comparisons with existing methods show the benefits of optimizing the measures in the gradient domain.
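
A one-dimensional toy makes the gradient-domain idea concrete: build a target derivative field by taking derivatives from whichever input owns each side of the seam, then recover the stitched signal as a least-squares fit to those derivatives. This is only an illustration of the principle, not the paper's cost functions or their 2-D optimization; stitch_1d and its arguments are assumed names.

```python
import numpy as np

def stitch_1d(left, right, seam):
    """left, right: equal-length 1-D signals; seam: index where the source switches."""
    n = len(left)
    target_grad = np.empty(n - 1)
    target_grad[:seam] = np.diff(left)[:seam]        # derivatives owned by the left image
    target_grad[seam:] = np.diff(right)[seam:]       # derivatives owned by the right image
    D = np.diff(np.eye(n), axis=0)                   # (n-1) x n finite-difference operator
    A = np.vstack([D, np.eye(n)[:1]])                # pin x[0] to remove the free constant
    b = np.concatenate([target_grad, [left[0]]])
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x
```

Because only derivatives are matched, a constant exposure difference between left and right produces no jump at the seam; the recovered signal smoothly absorbs the offset.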

Collaboration


Dive into Anat Levin's collaborations.

Top Co-Authors

Yair Weiss (Hebrew University of Jerusalem)
Amnon Shashua (Hebrew University of Jerusalem)
Dani Lischinski (Hebrew University of Jerusalem)
Assaf Zomet (Hebrew University of Jerusalem)
Daniel Glasner (Weizmann Institute of Science)
Boaz Nadler (Weizmann Institute of Science)
Taeg Sang Cho (Massachusetts Institute of Technology)