Yoav Y. Schechner
Technion – Israel Institute of Technology
Publications
Featured research published by Yoav Y. Schechner.
Computer Vision and Pattern Recognition | 2001
Yoav Y. Schechner; Srinivasa G. Narasimhan; Shree K. Nayar
We present an approach that easily removes the effects of haze from images. It is based on the fact that airlight scattered by atmospheric particles is usually partially polarized. Polarization filtering alone cannot remove the haze effects, except in restricted situations. Our method, however, works under a wide range of atmospheric and viewing conditions. We analyze the image formation process, taking into account polarization effects of atmospheric scattering. We then invert the process to remove haze from images. The method can be used with as few as two images taken through a polarizer at different orientations. It works instantly, without relying on changes of weather conditions. We present experimental results of complete dehazing under conditions that are far from ideal for polarization filtering, obtaining a great improvement of scene contrast and correction of color. As a by-product, the method also yields a range (depth) map of the scene and information about the properties of the atmospheric particles.
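For reference, the two-image inversion behind this method can be sketched as follows. The degree of polarization p of the airlight and the airlight saturation value A_inf are assumed known here (the paper estimates them from sky regions); all data below are synthetic.

```python
import numpy as np

def dehaze_polarized(I_max, I_min, p, A_inf):
    """Invert the hazy-image formation model from two polarizer frames.
    p: degree of polarization of the airlight; A_inf: airlight at
    infinite distance.  Both are assumed measured beforehand."""
    I_total = I_max + I_min                      # total scene intensity
    A = (I_max - I_min) / p                      # per-pixel airlight estimate
    t = 1.0 - A / A_inf                          # atmospheric transmittance
    L = (I_total - A) / np.clip(t, 1e-6, None)   # dehazed scene radiance
    return L, t                                  # -log(t) gives relative depth

# Synthetic check: simulate the forward model, then invert it.
rng = np.random.default_rng(0)
L_true = rng.uniform(0.2, 1.0, (4, 4))
t_true = rng.uniform(0.3, 0.9, (4, 4))
p, A_inf = 0.4, 1.0
A = A_inf * (1.0 - t_true)
I_max = 0.5 * (L_true * t_true + A * (1 + p))
I_min = 0.5 * (L_true * t_true + A * (1 - p))
L_rec, t_rec = dehaze_polarized(I_max, I_min, p, A_inf)
assert np.allclose(L_rec, L_true) and np.allclose(t_rec, t_true)
```

The range-map by-product mentioned in the abstract comes from the recovered transmittance, since t decays exponentially with distance.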
Computer Vision and Pattern Recognition | 2006
Sarit Shwartz; Einav Namer; Yoav Y. Schechner
Outdoor imaging is plagued by poor visibility conditions due to atmospheric scattering, particularly in haze. A major problem is spatially varying reduction of contrast by stray radiance (airlight), which is scattered by the haze particles towards the camera. Recent computer vision methods have shown that images can be compensated for haze, and can even yield a depth map of the scene. A key step in such scene recovery is subtraction of the airlight. In particular, this can be achieved by analyzing polarization-filtered images. However, the recovery requires parameters of the airlight. These parameters were estimated in past studies by measuring pixels in sky areas. This paper derives an approach for blindly recovering the parameter needed for separating the airlight from the measurements, thus recovering contrast, without user interaction and without requiring sky in the frame. This eases the interaction and conditions needed for image dehazing, which also requires compensation for attenuation. The approach has proved successful in experiments, some of which are shown here.
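To convey the flavor of the blind estimation (the paper's actual method is based on independent component analysis of sub-band images, not the brute-force scan below), here is a toy surrogate: scan candidate degrees of polarization and keep the value that best decorrelates the recovered airlight from the recovered direct transmission.

```python
import numpy as np

# Toy surrogate for blind airlight-parameter recovery: the true airlight
# and direct transmission are statistically independent, so a wrong
# polarization guess leaks one component into the other and raises
# their correlation.  All quantities here are synthetic.
rng = np.random.default_rng(4)
n = 5000
A = rng.uniform(0.1, 0.9, n)        # true airlight (independent of D)
D = rng.uniform(0.2, 1.0, n)        # true direct transmission
p_true = 0.5
I_total = D + A                     # sum of polarizer frames
I_diff = A * p_true                 # difference of polarizer frames

def decorrelation_score(p_hat):
    A_hat = I_diff / p_hat          # airlight under candidate p_hat
    D_hat = I_total - A_hat         # remaining "direct" component
    return abs(np.corrcoef(A_hat, D_hat)[0, 1])

grid = np.arange(0.30, 0.71, 0.05)
p_best = min(grid, key=decorrelation_score)
assert abs(p_best - p_true) < 0.051   # recovered near the true value
```

This is only a stand-in for the correlation structure the paper exploits; the published algorithm works on wavelet sub-bands and needs no such grid search.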
IEEE Journal of Oceanic Engineering | 2005
Yoav Y. Schechner; Nir Karpel
Underwater imaging is important for scientific research and technology as well as for popular activities, yet it is plagued by poor visibility conditions. In this paper, we present a computer vision approach that removes degradation effects in underwater vision. We analyze the physical effects of visibility degradation and show that the main degradation effects can be associated with partial polarization of light. Then, an algorithm is presented which inverts the image formation process to recover good visibility in images of scenes. The algorithm is based on a pair of images taken through a polarizer at different orientations. As a by-product, a distance map of the scene is also derived. In addition, this paper analyzes the noise sensitivity of the recovery. We successfully demonstrated our approach in experiments conducted in the sea. Great improvements of scene contrast and color correction were obtained, nearly doubling the underwater visibility range.
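The distance-map by-product follows from the backscatter alone: backscatter saturates exponentially with distance, so inverting it gives range. A minimal synthetic sketch, with the backscatter degree of polarization p, its saturation value B_inf, and the attenuation coefficient c all assumed known:

```python
import numpy as np

def distance_from_backscatter(I_max, I_min, p, B_inf, c):
    """Relative scene distance from two underwater polarizer frames.
    p: degree of polarization of the backscatter; B_inf: backscatter at
    infinite distance; c: water attenuation coefficient (sets the scale)."""
    B = (I_max - I_min) / p                    # per-pixel backscatter
    t = 1.0 - B / B_inf                        # water transmittance e^{-c z}
    return -np.log(np.clip(t, 1e-6, 1.0)) / c  # distance z

# Synthetic check: farther pixels accumulate more backscatter.
z_true = np.array([[1.0, 2.0], [3.0, 4.0]])
p, B_inf, c = 0.5, 1.0, 0.3
B = B_inf * (1.0 - np.exp(-c * z_true))
I_max = 0.5 * B * (1 + p)     # object signal omitted for brevity
I_min = 0.5 * B * (1 - p)
z_est = distance_from_backscatter(I_max, I_min, p, B_inf, c)
assert np.allclose(z_est, z_true)
```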
Computer Vision and Pattern Recognition | 2004
Yoav Y. Schechner; Nir Karpel
Underwater imaging is important for scientific research and technology, as well as for popular activities. We present a computer vision approach that easily removes degradation effects in underwater vision. We analyze the physical effects of visibility degradation and show that the main degradation effects can be associated with partial polarization of light. We therefore present an algorithm which inverts the image formation process to recover a good-visibility image of the object. The algorithm is based on a pair of images taken through a polarizer at different orientations. As a by-product, a distance map of the scene is derived as well. We successfully used our approach when experimenting in the sea using a system we built. We obtained great improvement of scene contrast and color correction, and nearly doubled the underwater visibility range.
International Conference on Pattern Recognition | 1998
Yoav Y. Schechner; Nahum Kiryati
Depth from Focus (DFF) and Depth from Defocus (DFD) methods are theoretically unified with the geometric triangulation principle. Fundamentally, the depth sensitivities of DFF and DFD are no different from those of stereo (or motion) based systems having the same physical dimensions. Contrary to common belief, DFD does not inherently avoid the matching (correspondence) problem. Basically, DFD and DFF do not avoid the occlusion problem any more than triangulation techniques do, but they are more stable in the presence of such disruptions. The fundamental advantage of DFF and DFD methods is the two-dimensionality of the aperture, allowing more robust estimation. We analyze the effect of noise in different spatial frequencies, and derive the optimal changes of the focus settings in DFD. These results elucidate the limitations of methods based on depth of field and provide a foundation for fair performance comparison between DFF/DFD and shape from stereo (or motion) algorithms.
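The equivalence with triangulation can be seen directly from thin-lens geometry: the blur-circle diameter of a misfocused point equals the disparity of a stereo pair whose baseline is the aperture diameter. A minimal numeric sketch (all optical parameters below are illustrative, not from the paper):

```python
import numpy as np

# Thin-lens depth-from-defocus sensitivity vs. stereo disparity.
f, D, u0 = 0.05, 0.02, 2.0          # focal length, aperture, focused depth (m)
u = np.linspace(1.0, 5.0, 5)        # candidate object depths (m)

v  = 1.0 / (1.0 / f - 1.0 / u)      # image distance for an object at depth u
v0 = 1.0 / (1.0 / f - 1.0 / u0)     # sensor position (focused at u0)
blur = D * np.abs(v - v0) / v       # blur-circle diameter on the sensor

# Stereo with baseline equal to the aperture diameter D, fixating at u0,
# with its image plane at the same distance v0, yields the same quantity:
disparity = D * v0 * np.abs(1.0 / u - 1.0 / u0)
assert np.allclose(blur, disparity)
```

The identity blur = D·|v − v0|/v = D·v0·|1/u − 1/u0| is exact, which is why DFD's depth sensitivity matches that of a triangulation system of the same physical dimensions.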
IEEE Transactions on Pattern Analysis and Machine Intelligence | 2009
Tali Treibitz; Yoav Y. Schechner
Vision in scattering media is important but challenging. Images suffer from poor visibility due to backscattering and attenuation. Most prior methods for scene recovery use active illumination scanners (structured and gated), which can be slow and cumbersome, while natural illumination is inapplicable to dark environments. The current paper addresses the need for a non-scanning recovery method that uses active scene irradiance. We study the formation of images under widefield artificial illumination. Based on the formation model, the paper presents an approach for recovering the object signal. It also yields rough information about the 3D scene structure. The approach can work with compact, simple hardware, having active widefield, polychromatic polarized illumination. The camera is fitted with a polarization analyzer. Two frames of the scene are taken, with different states of the analyzer or polarizer. A recovery algorithm follows the acquisition. It allows both the backscatter and the object reflection to be partially polarized. It thus unifies and generalizes prior polarization-based methods, which had assumed exclusive polarization of either of these components. The approach is limited to an effective range, due to image noise and illumination falloff. Thus, the limits and noise sensitivity are analyzed. We demonstrate the approach in underwater field experiments.
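Once the two degrees of polarization are calibrated, the unified two-frame inversion reduces to a 2×2 linear system per pixel. A minimal sketch with synthetic data (the component degrees of polarization p_B and p_S are assumptions here, taken as measured beforehand):

```python
import numpy as np

def unmix(I1, I2, p_B, p_S):
    """Separate backscatter B and object signal S from two analyzer
    states, allowing BOTH components to be partially polarized.
    Requires p_B != p_S; p_B, p_S are assumed calibrated."""
    total = I1 + I2                       # = B + S
    diff = I1 - I2                        # = B*p_B + S*p_S
    B = (diff - p_S * total) / (p_B - p_S)
    S = total - B
    return S, B

# Synthetic check with assumed degrees of polarization.
rng = np.random.default_rng(1)
S_true = rng.uniform(0, 1, (3, 3))
B_true = rng.uniform(0, 1, (3, 3))
p_B, p_S = 0.8, 0.2
I1 = 0.5 * (B_true * (1 + p_B) + S_true * (1 + p_S))
I2 = 0.5 * (B_true * (1 - p_B) + S_true * (1 - p_S))
S_est, B_est = unmix(I1, I2, p_B, p_S)
assert np.allclose(S_est, S_true) and np.allclose(B_est, B_true)
```

Setting p_S = 0 recovers the earlier methods that assumed an unpolarized object signal, which is the sense in which this model generalizes them.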
Optics Letters | 2006
Adam Greengard; Yoav Y. Schechner; Rafael Piestun
The accuracy of depth estimation based on defocus effects has been essentially limited by the depth of field of the imaging system. We show that depth estimation can be improved significantly relative to classical methods by exploiting three-dimensional diffraction effects. We formulate the problem by using information theory analysis and present, to the best of our knowledge, a new paradigm for depth estimation based on spatially rotating point-spread functions (PSFs). Such PSFs are fundamentally more sensitive to defocus thanks to their first-order axial variation. Our system acquires a frame by using a rotating PSF and jointly processes it with an image acquired by using a standard PSF to recover depth information. Analytical, numerical, and experimental evidence suggest that the approach is suitable for applications such as microscopy and machine vision.
Marine Technology Society Journal | 2008
Donna M. Kocak; Fraser R. Dalgleish; Frank M. Caimi; Yoav Y. Schechner
Underwater optical imaging advances from 2005 to the present are reviewed. Research and technical innovations are synopsized and organized in much the same way as in the previous report (Kocak and Caimi, 2005). Examples of several recent novel system applications are given, as are brief summaries of emerging underwater imaging research and development trends.
International Journal of Computer Vision | 2000
Yoav Y. Schechner; Nahum Kiryati; Ronen Basri
Consider situations where the depth at each point in the scene is multi-valued, due to the presence of a virtual image semi-reflected by a transparent surface. The semi-reflected image is linearly superimposed on the image of an object that is behind the transparent surface. A novel approach is proposed for the separation of the superimposed layers. Focusing on either of the layers yields initial separation, but crosstalk remains. The separation is enhanced by mutual blurring of the perturbing components in the images. However, this blurring requires the estimation of the defocus blur kernels. We thus propose a method for self calibration of the blur kernels, given the raw images. The kernels are sought to minimize the mutual information of the recovered layers. Autofocusing and depth estimation in the presence of semi-reflections are also considered. Experimental results are presented.
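A one-dimensional toy version of the mutual-blurring step illustrates why kernel calibration matters: the perturbing layer cancels exactly only when the guessed kernel matches the true one, which is what the mutual-information criterion searches for. Gaussian kernels and synthetic random layers are assumptions of this sketch, not the paper's setup:

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

# Frame f1 is focused on layer l1 (so l2 appears blurred by width s2),
# and vice versa.  Subtracting a re-blurred copy of the other frame
# removes the perturbing layer when the guessed kernel is correct.
rng = np.random.default_rng(2)
l1, l2 = rng.normal(size=256), rng.normal(size=256)
s1, s2 = 2.0, 3.0                        # true defocus blur widths

f1 = l1 + gaussian_filter1d(l2, s2)      # focused on layer 1
f2 = gaussian_filter1d(l1, s1) + l2      # focused on layer 2

est1 = f1 - gaussian_filter1d(f2, s2)    # = l1 - g2*g1*l1 : l2 cancelled
leak = est1 - (l1 - gaussian_filter1d(gaussian_filter1d(l1, s1), s2))
assert np.allclose(leak, 0, atol=1e-10)  # no trace of l2 remains
```

With a wrong kernel guess, a residue of l2 survives in est1 and correlates the two recovered layers; minimizing their mutual information over candidate kernels is how the paper self-calibrates the blur.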
Computer Vision and Pattern Recognition | 2005
Einat Kidron; Yoav Y. Schechner; Michael Elad
People and animals fuse auditory and visual information to obtain robust perception. A particular benefit of such cross-modal analysis is the ability to localize visual events associated with sound sources. We aim to achieve this using computer vision aided by a single microphone. Past efforts encountered problems stemming from the huge gap between the dimensions involved and the available data. This has led to solutions suffering from low spatio-temporal resolutions. We present a rigorous analysis of the fundamental problems associated with this task. Then, we present a stable and robust algorithm which overcomes past deficiencies. It grasps dynamic audio-visual events with high spatial resolution, and derives a unique solution. The algorithm effectively detects pixels that are associated with the sound, while filtering out other dynamic pixels. It is based on canonical correlation analysis (CCA), where we remove inherent ill-posedness by exploiting the typical spatial sparsity of audio-visual events. The algorithm is simple and efficient thanks to its reliance on linear programming, and is free of user-defined parameters. To quantitatively assess the performance, we devise a localization criterion. The algorithm's capabilities were demonstrated in experiments, where it overcame substantial visual distractions and audio noise.
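A drastically simplified stand-in for the localization step: with a one-dimensional audio feature, the maximally correlated visual direction concentrates on pixels that co-vary with the sound, so ranking pixels by correlation with the audio already localizes an isolated source. The paper's sparse-CCA formulation (solved by linear programming) is what makes this well-posed for real high-dimensional video; the data below are synthetic.

```python
import numpy as np

# One pixel of a synthetic "video" follows the audio track; the rest
# are independent dynamic background noise.
rng = np.random.default_rng(3)
T, n_pix = 200, 25
audio = rng.normal(size=T)                  # audio energy per frame
video = rng.normal(size=(T, n_pix)) * 0.1   # dynamic background pixels
video[:, 7] += audio                        # pixel 7 follows the sound

corr = np.abs(np.array([np.corrcoef(audio, video[:, i])[0, 1]
                        for i in range(n_pix)]))
assert corr.argmax() == 7                   # sound source localized
```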