Todor G. Georgiev
Adobe Systems
Publication
Featured research published by Todor G. Georgiev.
international conference on computational photography | 2009
Andrew Lumsdaine; Todor G. Georgiev
Plenoptic cameras, constructed with internal microlens arrays, focus those microlenses at infinity in order to sample the 4D radiance directly at the microlenses. The consequent assumption is that each microlens image is completely defocused with respect to the image created by the main camera lens. As a result, only a single pixel in the final image can be rendered from each microlens image, resulting in disappointingly low resolution. In this paper, we present a new approach to lightfield capture and image rendering that interprets the microlens array as an imaging system focused on the focal plane of the main camera lens. This approach captures a lightfield with significantly higher spatial resolution than the traditional approach, allowing us to render high-resolution images that meet the expectations of modern photographers. Although the new approach samples the lightfield with reduced angular density, analysis and experimental results demonstrate that there is sufficient parallax to completely support lightfield manipulation algorithms such as refocusing and novel views.
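The core rendering idea in the focused plenoptic approach can be illustrated in a few lines: instead of collapsing each microimage to one pixel, a central patch is cropped from each microimage and the patches are tiled into the output. The sketch below is a minimal, simplified illustration of that idea (function name, grid layout, and parameters are assumptions for this example, not the authors' implementation; in practice the patch size varies with depth and patches are blended).

```python
import numpy as np

def render_focused_plenoptic(lightfield, microimage_size, patch_size):
    """Tile central crops of each microimage into one output image.

    lightfield: 2D (grayscale) sensor image whose microimages lie on a
    regular grid of microimage_size x microimage_size pixels.
    patch_size: side of the central crop taken from each microimage;
    in the full algorithm, varying it shifts the plane of focus.
    """
    h, w = lightfield.shape
    ny, nx = h // microimage_size, w // microimage_size
    out = np.empty((ny * patch_size, nx * patch_size), dtype=lightfield.dtype)
    off = (microimage_size - patch_size) // 2  # centre the crop
    for j in range(ny):
        for i in range(nx):
            mi = lightfield[j * microimage_size:(j + 1) * microimage_size,
                            i * microimage_size:(i + 1) * microimage_size]
            out[j * patch_size:(j + 1) * patch_size,
                i * patch_size:(i + 1) * patch_size] = \
                mi[off:off + patch_size, off:off + patch_size]
    return out
```

Because each microimage now contributes a patch_size x patch_size block rather than one pixel, the rendered resolution grows accordingly, which is the spatio-angular trade-off the paper describes.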
Proceedings of SPIE | 2011
Todor G. Georgiev; Andrew Lumsdaine
Digital images from a CCD or CMOS sensor with a color filter array must undergo a demosaicing process to combine the separate color samples into a single color image. This interpolation process can interfere with the subsequent superresolution process. Plenoptic superresolution, which relies on precise sub-pixel sampling across captured microimages, is particularly sensitive to such resampling of the raw data. In this paper, we present an approach for superresolving plenoptic images that takes place at the time of demosaicing the raw color image data. Our approach exploits the interleaving provided by typical color filter arrays (e.g., the Bayer filter) to further refine plenoptic sub-pixel sampling. Our rendering algorithm treats the color channels in a plenoptic image separately, which improves final superresolution by a factor of two. With appropriate plenoptic capture, we show the theoretical possibility of rendering final images at full sensor resolution.
Journal of Electronic Imaging | 2010
Todor G. Georgiev; Andrew Lumsdaine
Plenoptic cameras, constructed with internal microlens arrays, capture both spatial and angular information, i.e., the full 4-D radiance, of a scene. The design of traditional plenoptic cameras assumes that each microlens image is completely defocused with respect to the image created by the main camera lens. As a result, only a single pixel in the final image is rendered from each microlens image, resulting in disappointingly low resolution. A recently developed alternative approach based on the focused plenoptic camera uses the microlens array as an imaging system focused on the image plane of the main camera lens. The flexible spatioangular trade-off that becomes available with this design enables rendering of final images with significantly higher resolution than those from traditional plenoptic cameras. We analyze the focused plenoptic camera in optical phase space and present basic, blended, and depth-based rendering algorithms for producing high-quality, high-resolution images. We also present our graphics-processing-unit-based implementations of these algorithms, which are able to render full-screen refocused images in real time.
Computer Graphics Forum | 2010
Todor G. Georgiev; Andrew Lumsdaine
The focused plenoptic camera differs from the traditional plenoptic camera in that its microlenses are focused on the photographed object rather than at infinity. The spatio‐angular tradeoffs available with this approach enable rendering of final images that have significantly higher resolution than those from traditional plenoptic cameras. Unfortunately, this approach can result in visible artifacts when basic rendering is used. In this paper, we present two new methods that work together to minimize these artifacts. The first method is based on careful design of the optical system. The second method is computational and based on a new lightfield rendering algorithm that extracts the depth information of a scene directly from the lightfield and then uses that depth information in the final rendering. Experimental results demonstrate the effectiveness of these approaches.
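One building block of depth-aware rendering is estimating how far the same scene feature shifts between adjacent microimages, since in a focused plenoptic camera that shift is a direct proxy for depth. The sketch below shows one simple way such a disparity could be estimated by block matching; the function name and the exhaustive integer search are illustrative assumptions, not the paper's actual algorithm.

```python
import numpy as np

def microimage_disparity(mi_left, mi_right, max_shift):
    """Estimate the integer disparity between two adjacent microimages.

    Searches horizontal shifts 0..max_shift for the one that minimises
    the mean squared difference over the overlapping columns. The
    returned shift relates to the depth of the imaged scene patch.
    """
    w = mi_left.shape[1]
    best_d, best_err = 0, np.inf
    for d in range(max_shift + 1):
        # Compare mi_left shifted right by d against mi_right.
        err = np.mean((mi_left[:, d:] - mi_right[:, :w - d]) ** 2)
        if err < best_err:
            best_d, best_err = d, err
    return best_d
```

A per-microimage depth map built this way can then drive the choice of patch size during rendering, which is how depth information suppresses the artifacts that fixed-size patches produce.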
european conference on computer vision | 2006
Todor G. Georgiev
We describe a new theoretical approach to image processing and vision. Expressed in mathematical terminology, in our formalism image space is a fibre bundle, and the image itself is the graph of a section on it. This mathematical model has advantages over the conventional view of the image as a function on the plane: based on the new method, we are able to process the image as viewed by the human visual system, which includes adaptation and perceptual correctness of the results. Our formalism is invariant to relighting and seamlessly handles illumination change. It also explains simultaneous contrast visual illusions, which are intrinsically related to the new covariant approach. Examples include Poisson image editing, inpainting, gradient domain HDR compression, and others.
Studies in Regional Science | 2009
Todor G. Georgiev; Andrew Lumsdaine; Sergio Goma
We demonstrate high dynamic range (HDR) imaging with the Plenoptic 2.0 camera. Multiple-exposure capture is achieved with a single shot using microimages created by a microlens array that has an interleaved set of different apertures.
Proceedings of SPIE | 2012
Todor G. Georgiev; Andrew Lumsdaine
The focused plenoptic camera is based on the Lippmann sensor: an array of microlenses focused on the pixels of a conventional image sensor. This device samples the radiance, or plenoptic function, as an array of cameras with large depth of field, focused at a certain plane in front of the microlenses. For digital refocusing, which is one of the important applications, the depth of field needs to be large, but there are fundamental optical limitations to this. The solution to the above problem is to use an array of interleaved microlenses of different focal lengths, focused at two or more different planes. In this way a focused image can be constructed at any depth of focus, and a really wide range of digital refocusing can be achieved. This paper presents our theory and results of implementing such a camera. Real-world images demonstrate the extended capabilities, and limitations are discussed.
international conference on computational photography | 2010
Todor G. Georgiev; Andrew Lumsdaine
The plenoptic function was originally defined as a record of both the 3D structure of the lightfield and of its dependence on parameters such as wavelength, polarization, etc. Still, most work on these ideas has emphasized the 3D aspect of lightfield capture and manipulation, with less attention paid to other parameters. In this paper, we leverage the high resolution and flexible sampling trade-offs of the focused plenoptic camera to perform high-resolution capture of the rich “non 3D” structure of the plenoptic function. Two different techniques are presented and analyzed, using extended dynamic range photography as a particular example. The first technique simultaneously captures multiple exposures with a microlens array that has an interleaved set of different filters. The second technique places multiple filters at the main lens aperture. Experimental results validate our approach, producing 1.3Mpixel HDR images with a single capture.
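Once differently filtered microimages are registered, the extended-dynamic-range step amounts to merging differently exposed measurements of the same radiance. A minimal sketch of that merge, assuming just two registered linear exposures and a known exposure ratio (the function name, the saturation threshold, and the binary selection rule are simplifying assumptions, not the paper's method):

```python
import numpy as np

def merge_exposures(short_exp, long_exp, ratio, sat=0.95):
    """Merge two registered exposures into one linear radiance map.

    short_exp, long_exp: linear images normalised to [0, 1]; long_exp
    was captured with `ratio` times more light (e.g. through a larger
    interleaved filter/aperture). Where the long exposure is clipped,
    the (noisier but unclipped) short exposure is used instead.
    """
    radiance_long = long_exp / ratio   # bring both onto a common scale
    use_short = long_exp >= sat        # long exposure saturated here
    return np.where(use_short, short_exp, radiance_long)
```

A practical implementation would blend smoothly near saturation and weight by estimated noise rather than switching hard, but the scale-and-select structure is the same.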
human vision and electronic imaging conference | 2005
Todor G. Georgiev
The Healing Brush is a tool introduced for the first time in Adobe Photoshop (2002) that removes defects in images by seamless cloning (gradient domain fusion). The Healing Brush algorithms are built on a new mathematical approach that uses fibre bundles and connections to model the representation of images in the visual system. Our mathematical results are derived from first principles of human vision, related to adaptation transforms of von Kries type and Retinex theory. In this paper we present the new result of healing in arbitrary color space. In addition to supporting image repair and seamless cloning, our approach also produces the exact solution to the problem of high dynamic range compression and can be applied to other image processing algorithms.
eurographics | 2005
Todor G. Georgiev
This paper describes an improvement to the Poisson image editing method for seamless cloning. Our approach is based on minimizing an energy expression invariant to relighting. The improved method seamlessly reconstructs the selected region, matching both pixel values and texture contrast of the surrounding area, while previous algorithms matched pixel values only. Our algorithm solves a deeper problem: it performs reconstruction in terms of the internal working mechanisms of the human visual system. Retinex-type effects of adaptation are built into the structure of the mathematical model, producing results that change covariantly with lighting.
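The gradient-domain machinery that both this paper and the Healing Brush build on is the Poisson equation: reconstruct a region so that its gradients match the source while its boundary matches the target. A minimal 1D sketch of that solve (the function name and the dense tridiagonal solve are illustrative assumptions; real images need a 2D sparse solver, and the papers above further modify the energy to be relighting-invariant):

```python
import numpy as np

def poisson_clone_1d(target, source, lo, hi):
    """1D sketch of Poisson seamless cloning.

    Replaces target[lo:hi] so its second differences match those of
    source over the same span, with Dirichlet boundary conditions
    f[lo-1] = target[lo-1] and f[hi] = target[hi].
    """
    n = hi - lo
    target = np.asarray(target, dtype=float)
    g = np.asarray(source, dtype=float)
    # Tridiagonal 1D Laplacian over the n interior unknowns.
    A = (np.diag(np.full(n, -2.0))
         + np.diag(np.ones(n - 1), 1)
         + np.diag(np.ones(n - 1), -1))
    b = g[lo + 1:hi + 1] - 2 * g[lo:hi] + g[lo - 1:hi - 1]  # source Laplacian
    b[0] -= target[lo - 1]    # fold boundary values into the RHS
    b[-1] -= target[hi]
    out = target.copy()
    out[lo:hi] = np.linalg.solve(A, b)
    return out
```

A useful sanity check is that cloning a source that differs from the target by a constant offset reproduces the target exactly: only gradients, not absolute values, are transferred.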