Network

Latest external collaborations at the country level.

Hotspot

Research topics in which Tali Treibitz is active.

Publication


Featured research published by Tali Treibitz.


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2009

Active Polarization Descattering

Tali Treibitz; Yoav Y. Schechner

Vision in scattering media is important but challenging. Images suffer from poor visibility due to backscattering and attenuation. Most prior methods for scene recovery use active illumination scanners (structured and gated), which can be slow and cumbersome, while natural illumination is inapplicable to dark environments. The current paper addresses the need for a non-scanning recovery method that uses active scene irradiance. We study the formation of images under widefield artificial illumination. Based on the formation model, the paper presents an approach for recovering the object signal. It also yields rough information about the 3D scene structure. The approach can work with compact, simple hardware, having active widefield, polychromatic polarized illumination. The camera is fitted with a polarization analyzer. Two frames of the scene are taken, with different states of the analyzer or polarizer. A recovery algorithm follows the acquisition. It allows both the backscatter and the object reflection to be partially polarized. It thus unifies and generalizes prior polarization-based methods, which had assumed exclusive polarization of either of these components. The approach is limited to an effective range due to image noise and illumination falloff. Thus, the limits and noise sensitivity are analyzed. We demonstrate the approach in underwater field experiments.
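As a concrete illustration of the two-frame recovery described above, the following minimal sketch separates backscatter from the object signal under the simplifying assumption that only the backscatter is polarized (the paper relaxes this and allows partially polarized object reflection). The degree of polarization is assumed to be estimated from an object-free region of the frames.

import numpy as np

def descatter(i_max, i_min, p_backscatter):
    """Two-frame polarization descattering (simplified sketch).

    i_max, i_min  -- float arrays: frames at analyzer states of maximal and
                     minimal backscatter
    p_backscatter -- degree of polarization of the backscatter, assumed to be
                     estimated from an object-free background region
    """
    backscatter = (i_max - i_min) / p_backscatter   # estimated veiling light
    signal = (i_max + i_min) - backscatter          # recovered object signal
    return np.clip(signal, 0.0, None), backscatter

The backscatter map also serves as the rough cue to scene structure mentioned in the abstract, since backscatter grows with distance.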


Computer Vision and Pattern Recognition | 2008

Flat refractive geometry

Tali Treibitz; Yoav Y. Schechner; Hanumant Singh

While the study of geometry has mainly concentrated on single-viewpoint (SVP) cameras, there is growing attention to more general non-SVP systems. Here we study an important class of systems that inherently have a non-SVP: a perspective camera imaging through an interface into a medium. Such systems are ubiquitous: they are common when looking into water-based environments. The paper analyzes the common flat-interface class of systems. It characterizes the locus of the viewpoints (caustic) of this class, and proves that the SVP model is invalid for it. This may explain geometrical errors encountered in prior studies. Our physics-based model is parameterized by the distance of the lens from the medium interface, besides the focal length. The physical parameters are calibrated by a simple approach that can be based on a single frame. This directly determines the system geometry. The calibration is then used to compensate for modeled system distortion. Based on this model, geometrical measurements of objects are significantly more accurate than if based on an SVP model. This is demonstrated in real-world experiments.
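To make the non-single-viewpoint behavior concrete, here is a minimal sketch (not the paper's calibration procedure) that traces a camera ray through a flat interface using Snell's law; the refractive index and the lens-to-interface distance d are assumed inputs.

import numpy as np

def refract_through_flat_port(ray_dir, d, n_air=1.0, n_water=1.33):
    """ray_dir: unit 3-vector leaving the pinhole (camera at the origin, flat
    interface at the plane z = d with normal +z). Returns the crossing point on
    the interface and the refracted direction in the water (assumed indices)."""
    ray_dir = ray_dir / np.linalg.norm(ray_dir)
    t = d / ray_dir[2]                      # parameter where the ray hits z = d
    hit = t * ray_dir                       # intersection with the interface
    sin_i = np.linalg.norm(ray_dir[:2])     # sine of the incidence angle
    sin_r = (n_air / n_water) * sin_i       # Snell's law
    cos_r = np.sqrt(1.0 - sin_r**2)
    tangent = ray_dir[:2] / sin_i if sin_i > 0 else np.zeros(2)
    refracted = np.append(sin_r * tangent, cos_r)
    return hit, refracted

Extending the refracted rays of different pixels backward shows that they cross the optical axis at different depths, i.e. there is no single viewpoint, which is the geometry the paper models and calibrates.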


Computer Vision and Pattern Recognition | 2016

Non-local Image Dehazing

Dana Berman; Tali Treibitz; Shai Avidan

Haze limits visibility and reduces image contrast in outdoor images. The degradation is different for every pixel and depends on the distance of the scene point from the camera. This dependency is expressed in the transmission coefficients, which control the scene attenuation and amount of haze in every pixel. Previous methods solve the single image dehazing problem using various patch-based priors. We, on the other hand, propose an algorithm based on a new, non-local prior. The algorithm relies on the assumption that colors of a haze-free image are well approximated by a few hundred distinct colors that form tight clusters in RGB space. Our key observation is that pixels in a given cluster are often non-local, i.e., they are spread over the entire image plane and are located at different distances from the camera. In the presence of haze these varying distances translate to different transmission coefficients. Therefore, each color cluster in the clear image becomes a line in RGB space that we term a haze-line. Using these haze-lines, our algorithm recovers both the distance map and the haze-free image. The algorithm is linear in the size of the image, deterministic, and requires no training. It performs well on a wide variety of images and is competitive with other state-of-the-art methods.
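A minimal sketch of the haze-line estimator follows. It assumes the air-light A is known (the paper also estimates it), replaces the paper's clustering on a sphere tessellation with a crude quantization of pixel directions, and omits the transmission-map regularization used in the paper.

import numpy as np

def dehaze_haze_lines(image, airlight, n_bins=500, t_min=0.1):
    """image: HxWx3 float array in [0,1]; airlight: length-3 array (assumed known)."""
    ia = image.reshape(-1, 3) - airlight            # translate so the air-light is the origin
    r = np.linalg.norm(ia, axis=1) + 1e-9           # distance of each pixel from the air-light
    unit = ia / r[:, None]
    # Crudely bin unit directions: pixels sharing a bin approximate one haze-line.
    bins = np.floor((unit + 1.0) * (n_bins ** (1 / 3)) / 2.0).astype(int)
    keys = bins[:, 0] * 10_000 + bins[:, 1] * 100 + bins[:, 2]
    t = np.empty_like(r)
    for k in np.unique(keys):
        idx = keys == k
        t[idx] = r[idx] / r[idx].max()              # radius ratio = per-pixel transmission estimate
    t = np.clip(t, t_min, 1.0)
    recovered = (image.reshape(-1, 3) - airlight) / t[:, None] + airlight
    return np.clip(recovered, 0.0, 1.0).reshape(image.shape), t.reshape(image.shape[:2])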


International Conference on Computer Vision | 2011

Pose, illumination and expression invariant pairwise face-similarity measure via Doppelgänger list comparison

Florian Schroff; Tali Treibitz; David J. Kriegman; Serge J. Belongie

Face recognition approaches have traditionally focused on direct comparisons between aligned images, e.g. using pixel values or local image features. Such comparisons become prohibitively difficult when comparing faces across extreme differences in pose, illumination and expression. The goal of this work is to develop a face-similarity measure that is largely invariant to these differences. We propose a novel data-driven method based on the insight that comparing images of faces is most meaningful when they are in comparable imaging conditions. To this end we describe an image of a face by an ordered list of identities from a Library. The order of the list is determined by the similarity of the Library images to the probe image. The lists act as a signature for each face image: similarity between face images is determined via the similarity of the signatures. Here the CMU Multi-PIE database, which includes images of 337 individuals in more than 2000 pose, illumination and expression combinations, serves as the Library. We show improved performance over state-of-the-art face-similarity measures based on local features, such as FPLBP, especially across large pose variations on FacePix and Multi-PIE. On LFW we show improved performance in comparison with measures like SIFT (on fiducials), LBP, FPLBP and Gabor (C1).
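The list-based signature can be sketched compactly. The feature representation, the per-identity max pooling, and the rank-correlation comparison below are placeholder assumptions for illustration; the paper defines its own list-comparison measure.

import numpy as np

def doppelganger_signature(probe_feat, library_feats, library_ids):
    """Identities ordered by the similarity of their best-matching Library image
    to the probe (hypothetical unit-norm feature vectors assumed)."""
    sims = library_feats @ probe_feat                   # cosine similarity
    best_per_id = {}
    for ident, s in zip(library_ids, sims):
        best_per_id[ident] = max(best_per_id.get(ident, -np.inf), float(s))
    return sorted(best_per_id, key=best_per_id.get, reverse=True)

def face_similarity(sig_a, sig_b):
    """Compare two signatures over the same identity set by rank correlation."""
    ids = sorted(set(sig_a))
    rank_a = np.array([sig_a.index(i) for i in ids], dtype=float)
    rank_b = np.array([sig_b.index(i) for i in ids], dtype=float)
    return float(np.corrcoef(rank_a, rank_b)[0, 1])     # Spearman = Pearson on ranks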


Journal of the Optical Society of America A: Optics, Image Science, and Vision | 2014

Use of commercial off-the-shelf digital cameras for scientific data acquisition and scene-specific color calibration

Derya Akkaynak; Tali Treibitz; Bei Xiao; Umut A. Gurkan; Justine J. Allen; Utkan Demirci; Roger T. Hanlon

Commercial off-the-shelf digital cameras are inexpensive and easy-to-use instruments that can be used for quantitative scientific data acquisition if images are captured in raw format and processed so that they maintain a linear relationship with scene radiance. Here we describe the image-processing steps required for consistent data acquisition with color cameras. In addition, we present a method for scene-specific color calibration that increases the accuracy of color capture when a scene contains colors that are not well represented in the gamut of a standard color-calibration target. We demonstrate applications of the proposed methodology in the fields of biomedical engineering, artwork photography, perception science, marine biology, and underwater imaging.
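The calibration step reduces to a least-squares fit. The sketch below assumes linear camera RGB values (derived from raw images after the processing steps the paper describes) for a set of patches, which may include scene-sampled colors, matched to known reference values.

import numpy as np

def fit_color_correction(camera_rgb, reference_rgb):
    """camera_rgb, reference_rgb: Nx3 arrays of matched patch values (linear).
    Returns a 3x4 affine color-correction transform (3x3 matrix plus offset)."""
    design = np.hstack([camera_rgb, np.ones((camera_rgb.shape[0], 1))])   # N x 4
    transform, *_ = np.linalg.lstsq(design, reference_rgb, rcond=None)    # 4 x 3
    return transform.T                                                    # 3 x 4

def apply_color_correction(transform, pixels):
    """pixels: Nx3 linear camera RGB -> Nx3 corrected RGB."""
    design = np.hstack([pixels, np.ones((pixels.shape[0], 1))])
    return design @ transform.T

Including patches sampled from the scene itself in camera_rgb/reference_rgb is what makes the calibration scene-specific, as described above.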


Computer Vision and Pattern Recognition | 2006

Instant 3Descatter

Tali Treibitz; Yoav Y. Schechner

Imaging in scattering media such as fog and water is important but challenging. Images suffer from poor visibility due to backscattering and signal attenuation. Most prior methods for visibility improvement use active illumination scanners (structured and gated), which are slow and cumbersome. On the other hand, natural illumination is inapplicable to dark environments. The current paper counters these deficiencies. We study the formation of images under wide field (non-scanning) artificial illumination. We discovered some characteristics of backscattered light empirically. Based on these, the paper presents a visibility recovery approach which also yields a rough estimate of the 3D scene structure. The method is simple and requires compact hardware, using active wide field polarized illumination. Two images of the scene are instantly taken, with different states of a camera-mounted polarizer. A recovery algorithm then follows. We demonstrate the approach in underwater field experiments.
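For the rough structure estimate mentioned above, a minimal sketch is given below. It assumes the commonly used saturation model B(z) = B_inf * (1 - exp(-beta * z)) with calibrated constants B_inf and beta, which simplifies the paper's treatment.

import numpy as np

def rough_distance_from_backscatter(backscatter, b_inf, beta):
    """Invert B(z) = b_inf * (1 - exp(-beta * z)) to get a rough distance map.
    b_inf and beta are assumed calibrated constants."""
    ratio = np.clip(backscatter / b_inf, 0.0, 0.999)   # avoid log(0) near saturation
    return -np.log(1.0 - ratio) / beta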


Coral Reefs | 2016

Theme section on mesophotic coral ecosystems: advances in knowledge and future perspectives

Yossi Loya; Gal Eyal; Tali Treibitz; Michael P. Lesser; Richard S. Appeldoorn

The Second International Mesophotic Coral Ecosystems (MCEs) workshop was held in Eilat, Israel, October 26–31, 2014. Here we provide an account of: (1) advances in our knowledge of MCE ecology, including the central question of the potential vertical connectivity between MCEs and shallow-water reefs (SWRs), and that of the validity of the deep-reef refugia hypothesis (DRRH); (2) the contribution of the 2014 MCE workshop to the central question presented in (1), as well as its contribution to novel MCE studies on corals, sponges, fish, and crabs; and (3) gaps, priorities, and recommendations for future research stemming from the workshop. Despite their close proximity to well-studied SWRs, and the growing evidence of their importance, our scientific knowledge of MCEs is still in its infancy. During the last five years, we have witnessed an ever-increasing scientific interest in MCEs, expressed in the exponential increase in the number of publications studying this unique environment. The emerging consensus is that lower MCE benthic assemblages represent unique communities, either of separate species or genetically distinct individuals within species, and any significant support for the DRRH will be limited to upper MCEs. Determining the health and stability of MCEs, their biodiversity, and the degree of genetic connectivity among SWRs and MCEs will ultimately indicate the ability of MCEs to contribute to the resilience of SWRs and help to guide future management and conservation strategies. MCEs therefore deserve management consideration in their own right. With the technological advancements taking place in recent years that facilitate access to MCEs, the prospects for exciting and innovative discoveries resulting from MCE research, spanning a wide variety of fields, are immense.


Computer Vision and Pattern Recognition | 2009

Polarization: Beneficial for visibility enhancement?

Tali Treibitz; Yoav Y. Schechner

When imaging in scattering media there is a need to enhance visibility. Some approaches have used polarized images in this context with apparent success. These methods take advantage of the fact that the path radiance (air light) is partially polarized. However, mounting a polarizer attenuates the signal associated with the object. This attenuation degrades the image quality. Thus, a question arises: is the use of a polarizer worth this loss? The ability to see objects is limited by noise. Therefore, in this work we analyze the change in signal-to-noise ratio (SNR) following the use of a polarizer or a dehazing process. Typically, methods use either one polarized image (with minimum path radiance) or two polarized images corresponding to extrema of the path radiance. We show that if the only goal is signal discrimination over noise (and not color or radiance recovery) in haze, the use of polarization in both approaches is unnecessary: polarization rarely improves the SNR over an average of unpolarized images acquired over the same total acquisition time. Nevertheless, under a single-frame constraint, the use of a single polarized image is beneficial.
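The flavor of the comparison can be sketched with a back-of-the-envelope shot-noise model: one frame at the best analyzer angle versus unpolarized acquisition over the same total exposure. The model and the numbers below are illustrative assumptions, not the paper's full analysis.

import numpy as np

def snr_polarized(signal, airlight, p):
    """One frame at the best analyzer angle, full acquisition time. An ideal
    analyzer passes half of the unpolarized object signal and the unpolarized
    fraction of the path radiance (assumed simple model, shot noise only)."""
    detected_signal = 0.5 * signal
    detected_total = 0.5 * signal + 0.5 * (1.0 - p) * airlight
    return detected_signal / np.sqrt(detected_total)

def snr_unpolarized_average(signal, airlight):
    """No analyzer, same total acquisition time (e.g. two averaged frames)."""
    return signal / np.sqrt(signal + airlight)

# Illustrative numbers (assumed): moderately polarized path radiance.
print(snr_polarized(signal=100, airlight=500, p=0.5))        # ~3.8
print(snr_unpolarized_average(signal=100, airlight=500))     # ~4.1, slightly higher here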


PLOS ONE | 2015

Towards Automated Annotation of Benthic Survey Images: Variability of Human Experts and Operational Modes of Automation

Oscar Beijbom; Peter J. Edmunds; Chris Roelfsema; Jennifer E. Smith; David I. Kline; Benjamin P. Neal; Matthew J. Dunlap; Vincent W. Moriarty; Tung-Yung Fan; Chih-Jui Tan; Stephen Chan; Tali Treibitz; Anthony Gamst; B. Greg Mitchell; David J. Kriegman

Global climate change and other anthropogenic stressors have heightened the need to rapidly characterize ecological changes in marine benthic communities across large scales. Digital photography enables rapid collection of survey images to meet this need, but the subsequent image annotation is typically a time-consuming, manual task. We investigated the feasibility of using automated point annotation to expedite cover estimation of the 17 dominant benthic categories from survey images captured at four Pacific coral reefs. Inter- and intra-annotator variability among six human experts was quantified and compared to semi- and fully automated annotation methods, which are made available at coralnet.ucsd.edu. Our results indicate high expert agreement for identification of coral genera, but lower agreement for algal functional groups, in particular between turf algae and crustose coralline algae. This indicates the need for unequivocal definitions of algal groups, careful training of multiple annotators, and enhanced imaging technology. Semi-automated annotation, where 50% of the annotation decisions were performed automatically, yielded cover estimate errors comparable to those of the human experts. Furthermore, fully automated annotation yielded rapid, unbiased cover estimates but with increased variance. These results show that automated annotation can increase spatial coverage and decrease time and financial outlay for image-based reef surveys.
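The semi-automated mode can be sketched as a confidence-thresholded split in which the machine keeps labels for points it is confident about and defers the rest to human annotators. The classifier interface and the fixed automation fraction below are assumptions for illustration; the operational tools are available at coralnet.ucsd.edu.

import numpy as np

def split_annotations(class_probs, automation_fraction=0.5):
    """class_probs: N x K array of per-point class probabilities from a classifier
    (hypothetical interface). Returns (auto_indices, auto_labels, deferred_indices)."""
    confidence = class_probs.max(axis=1)
    # Pick the threshold so the requested fraction of points is handled automatically.
    threshold = np.quantile(confidence, 1.0 - automation_fraction)
    auto = np.where(confidence >= threshold)[0]
    deferred = np.where(confidence < threshold)[0]
    auto_labels = class_probs[auto].argmax(axis=1)
    return auto, auto_labels, deferred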


IEEE Transactions on Image Processing | 2012

Turbid Scene Enhancement Using Multi-Directional Illumination Fusion

Tali Treibitz; Yoav Y. Schechner

Ambient light is strongly attenuated in turbid media. Moreover, natural light is often more highly attenuated in some spectral bands, relative to others. Hence, imaging in turbid media often relies heavily on artificial sources for illumination. Scenes irradiated by an off-axis single point source have enhanced local object shadow edges, which may increase object visibility. However, the images may suffer from severe nonuniformity, regions of low signal (being distant from the source), and regions of strong backscatter. On the other hand, simultaneously illuminating the scene from multiple directions increases the backscatter and fills in shadows, both of which degrade local contrast. Some previous methods tackle backscatter by scanning the scene, either temporally or spatially, requiring a large number of frames. We suggest using a few frames, in each of which wide-field scene irradiance originates from a different direction. This way, shadow contrast can be maintained and backscatter can be minimized in each frame, while the sequence at large has a wider, more spatially uniform illumination. The frames are then fused by post-processing into a single, clearer image. We demonstrate significant visibility enhancement underwater using as few as two frames.
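As a generic illustration of the fusion step, the sketch below weights each frame per pixel by its smoothed local contrast so that well-lit, shadow-rich regions dominate the result; this Laplacian-based weighting is an assumed stand-in, not necessarily the paper's fusion rule.

import numpy as np
from scipy.ndimage import laplace, uniform_filter

def fuse_frames(frames, eps=1e-6):
    """frames: list of HxW grayscale float arrays, each lit from a different direction."""
    weights = []
    for f in frames:
        contrast = uniform_filter(np.abs(laplace(f)), size=15)   # smoothed local contrast
        weights.append(contrast + eps)
    weights = np.stack(weights)
    weights /= weights.sum(axis=0, keepdims=True)                # normalize per pixel
    return (weights * np.stack(frames)).sum(axis=0)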

Collaboration


Dive into Tali Treibitz's collaborations.

Top Co-Authors

Yoav Y. Schechner
Technion – Israel Institute of Technology

David I. Kline
University of California

Oscar Beijbom
University of California