Publication


Featured research published by Amit Adam.


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2013

Color Invariants for Person Reidentification

Igor Kviatkovsky; Amit Adam; Ehud Rivlin

We revisit the problem of specific object recognition using color distributions. In some applications, such as specific person identification, the color distributions are highly likely to be multimodal and hence to contain a special structure. Although the color distribution changes under different lighting conditions, some aspects of its structure turn out to be invariant. We refer to this structure as an intradistribution structure, and show that it is invariant under a wide range of imaging conditions while being discriminative enough to be practical. Our signature uses shape context descriptors to represent the intradistribution structure. Assuming the widely used diagonal model, we validate that our signature is invariant under certain illumination changes. Experimentally, we use color information as the only cue and obtain good recognition performance on publicly available databases covering both indoor and outdoor conditions. Combining our approach with the complementary covariance descriptor, we demonstrate results exceeding state-of-the-art performance on the challenging VIPeR and CAVIAR4REID databases.
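
A quick sketch can make the diagonal-model invariance concrete: a channel-wise illumination gain becomes a pure translation of the color distribution in log-RGB space, so its shape (the intradistribution structure) is preserved. The following toy NumPy example is our own illustration with made-up gain values, not the authors' code.

import numpy as np

# Our illustration (not the authors' code): under the diagonal model an
# illumination change multiplies each RGB channel by its own gain,
#   (R, G, B) -> (a*R, b*G, c*B),
# so in log-RGB space the whole color distribution merely translates and
# shape-based signatures of it are unchanged.

def log_rgb(pixels, eps=1e-6):
    """Map an (N, 3) array of RGB values to log space."""
    return np.log(pixels.astype(np.float64) + eps)

rng = np.random.default_rng(0)
pixels = rng.uniform(1, 255, size=(1000, 3))   # a toy color cloud
gains = np.array([0.6, 1.3, 0.9])              # hypothetical illumination
relit = pixels * gains

shift = log_rgb(relit) - log_rgb(pixels)
# Every pixel moves by the same log-gain vector, i.e. a pure translation:
print(np.allclose(shift, np.log(gains), atol=1e-3))  # True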


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2001

ROR: rejection of outliers by rotations

Amit Adam; Ehud Rivlin; Ilan Shimshoni

We address the problem of rejecting false matches of points between two perspective views. The two views are taken from two arbitrary, unknown positions and orientations. We present an algorithm for identifying the false matches between the views. The algorithm exploits the possibility of rotating one of the images to achieve some common behavior of the correct matches; matches that deviate from this common behavior turn out to be false. Our algorithm does not, in any way, use the image characteristics of the matched features. In particular, it avoids the problems that cause the false matches in the first place. The algorithm works even in cases where the percentage of false matches is as high as 85 percent. It may be run as a post-processing step on the output of any point matching algorithm, and its use may significantly improve the ratio of correct to incorrect matches. We present the algorithm, identify the conditions under which it works, and present test results.


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2009

On Scene Segmentation and Histograms-Based Curve Evolution

Amit Adam; Ron Kimmel; Ehud Rivlin

We consider curve evolution based on comparing distributions of features, and its applications to scene segmentation. In the first part, we advocate cross-bin metrics such as the Earth mover's distance (EMD) instead of standard bin-wise metrics such as the Bhattacharyya or Kullback-Leibler metrics. To derive flow equations for minimizing functionals involving the EMD, we employ a tractable expression for the EMD between one-dimensional distributions. We then apply the derived flows to various examples of single-image segmentation and to scene analysis using video data. In the latter, we consider the problem of segmenting a scene into spatial regions in which different activities occur. We use a nonparametric local representation of the regions by considering multiple one-dimensional histograms of normalized spatiotemporal derivatives, and obtain semisupervised segmentation of the regions using the flows derived in the first part of the paper. Our results are demonstrated on challenging surveillance scenes, and compare favorably with state-of-the-art results using parametric representations by dynamic systems or mixtures of them.
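
For one-dimensional distributions the EMD has a well-known closed form, the L1 distance between cumulative distributions, which is the kind of tractable expression such flow derivations rely on. A small illustrative sketch (ours, not the paper's code):

import numpy as np

def emd_1d(p, q):
    """EMD between two 1-D histograms defined on the same bins.

    In one dimension the EMD reduces to the L1 distance between the
    cumulative distributions -- a closed form cheap enough to embed in
    a curve-evolution functional.
    """
    p = np.asarray(p, dtype=np.float64) / np.sum(p)
    q = np.asarray(q, dtype=np.float64) / np.sum(q)
    return np.sum(np.abs(np.cumsum(p - q)))

# Cross-bin behavior: a shifted spike stays "close" under EMD, while
# bin-wise metrics (Bhattacharyya, Kullback-Leibler) see zero overlap
# in both cases.
a = [1, 0, 0, 0]
b = [0, 1, 0, 0]
c = [0, 0, 0, 1]
print(emd_1d(a, b), emd_1d(a, c))  # 1.0 3.0 -- distance grows with the shift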


International Conference on Robotics and Automation | 2000

Computing the sensory uncertainty field of a vision-based localization sensor

Amit Adam; Ehud Rivlin; Ilan Shimshoni

It has been recognized that robust motion planners should take into account the varying performance of localization sensors across the configuration space. Although a number of works have shown the benefits of using such a performance map, work on actually computing one has been limited and has mostly addressed range sensors. Since vision is an important sensor for localization, performance maps for vision-based sensors are needed. We present a method for computing the performance map of a vision-based localization sensor. We compute the map and show that it accurately describes the actual performance of the sensor on both synthetic and real images. The method involves evaluating closed-form formulas and hence is very fast. Using the performance map computed by this method for motion planning and for devising sensing strategies will contribute to more robust navigation algorithms.
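
The closed-form computation is in the spirit of first-order error propagation: if the pose estimate depends on noisy image measurements, its covariance is approximately J Sigma J^T, where J is the Jacobian of the estimator. A hypothetical sketch of one cell of such a performance map (the Jacobian and noise values below are invented):

import numpy as np

# Hypothetical sketch of one cell of a localization performance map:
# propagate measurement noise through the estimator to first order,
#   Sigma_pose ~= J @ Sigma_meas @ J.T,
# and summarize it as a scalar score. J and the noise level are invented.

def pose_covariance(J, sigma_meas):
    """First-order covariance of the pose estimate."""
    return J @ sigma_meas @ J.T

def uncertainty_score(J, sigma_meas):
    """Scalar quality measure, e.g. the trace of the pose covariance."""
    return np.trace(pose_covariance(J, sigma_meas))

J = np.random.default_rng(1).normal(size=(3, 4))  # 3-DOF pose, 4 measurements
Sigma = 0.5**2 * np.eye(4)                        # i.i.d. pixel noise, std 0.5
print(uncertainty_score(J, Sigma))                # one cell of the map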


Systems, Man, and Cybernetics | 1999

Fusion of fixation and odometry for vehicle navigation

Amit Adam; Ehud Rivlin; Hector Rotstein

This paper deals with the problem of determining the position and orientation of an autonomous guided vehicle (AGV) by fusing odometry with the information provided by a vision system. The main idea is to exploit the camera's ability to point in different directions and fixate on a point in the environment while the AGV is moving. By fixating on a landmark, one can improve the navigation accuracy even if the scene coordinates of the landmark are unknown. This is a major improvement over previous methods, which assume that the coordinates of the landmark are known: any point of the observed scene can be selected as a landmark, not just pre-measured points. We argue that fixation is fundamentally simpler than the previously mentioned methods, since only one point needs to be tracked, as opposed to multiple points in other methods. This removes the need to identify which landmark is currently being tracked, whether through a matching algorithm or by other means. We support our findings with both experimental and simulation results.
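
As a rough illustration of the kind of fusion involved, the sketch below implements a single Kalman-style correction of a planar vehicle pose from one bearing (fixation) measurement. It is our own toy example with made-up noise values; unlike the paper's method, it assumes known landmark coordinates purely to keep the measurement model short.

import numpy as np

# Toy sketch, not the paper's algorithm: one Kalman-style correction of a
# planar pose (px, py, heading) from a single bearing measurement to a
# fixated point. Landmark coordinates are assumed known here only to keep
# the measurement model short; the paper's point is that fixation helps
# even when they are not.

def bearing(x, lm):
    """Bearing from pose x to landmark lm, in the vehicle frame."""
    return np.arctan2(lm[1] - x[1], lm[0] - x[0]) - x[2]

def ekf_bearing_update(x, P, z, lm, r_var):
    dx, dy = lm[0] - x[0], lm[1] - x[1]
    q = dx * dx + dy * dy
    H = np.array([[dy / q, -dx / q, -1.0]])           # Jacobian of the bearing
    S = H @ P @ H.T + r_var
    K = P @ H.T / S                                   # Kalman gain (3x1)
    innov = z - bearing(x, lm)
    innov = np.arctan2(np.sin(innov), np.cos(innov))  # wrap to [-pi, pi]
    return x + (K * innov).ravel(), (np.eye(3) - K @ H) @ P

x = np.array([0.0, 0.0, 0.1])                         # pose after odometry step
P = np.diag([0.2, 0.2, 0.05])                         # odometry uncertainty
x, P = ekf_bearing_update(x, P, z=0.55, lm=(5.0, 3.0), r_var=0.01)
print(x, np.trace(P) < 0.45)                          # fixation shrank the covariance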


Computer Vision and Pattern Recognition | 2000

ROR: rejection of outliers by rotations in stereo matching

Amit Adam; Ehud Rivlin; Ilan Shimshoni

We address the problem of rejecting false matches of points between two perspective views. Even the best image matching algorithms make some mistakes and output some false matches. We present an algorithm for identifying the false matches between the views. The algorithm exploits the possibility of rotating one of the images to achieve some common behavior of the correct matches; matches that deviate from this common behavior turn out to be false. The statistical tool we use is the mean shift mode estimator. Our algorithm does not, in any way, use the image characteristics of the matched features. In particular, it avoids the problems that cause the false matches in the first place. The algorithm may be run as a post-processing step on the output of any point matching algorithm, and its use may significantly improve the ratio of correct to incorrect matches. On real images, our algorithm has improved the percentage of correct matches from an initial 20%-30% to a final 70%-80%. For the robust estimation algorithms employed afterwards, this is a very desirable quality, since it significantly reduces their computational cost. We present the algorithm, identify the conditions under which it works, and present results of testing it on both synthetic and real images.
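
The mean shift mode estimator named above is easy to sketch in one dimension. Given one scalar "behavior" value per candidate match, it locates the dominant mode, and matches far from the mode are rejected. The per-match statistic itself is paper-specific, so the values below are synthetic.

import numpy as np

# Sketch of a 1-D mean shift mode estimator. `values` stands for the
# per-match "behavior" statistic (paper-specific, synthetic here); the
# dominant mode is the common behavior of the correct matches, and
# matches far from it are rejected.

def mean_shift_mode(values, bandwidth, iters=30):
    # start at the sample with the highest kernel density, then iterate
    # the kernel-weighted mean until it settles on the mode
    pairwise = (values[:, None] - values[None, :]) / bandwidth
    dens = np.exp(-0.5 * pairwise**2).sum(axis=1)
    x = values[dens.argmax()]
    for _ in range(iters):
        w = np.exp(-0.5 * ((values - x) / bandwidth) ** 2)
        x = np.sum(w * values) / np.sum(w)
    return x

rng = np.random.default_rng(0)
inliers = rng.normal(2.0, 0.1, size=30)     # consistent behavior
outliers = rng.uniform(-10, 10, size=70)    # 70% false matches
values = np.concatenate([inliers, outliers])

mode = mean_shift_mode(values, bandwidth=0.3)
keep = np.abs(values - mode) < 0.9          # reject far-from-mode matches
print(round(mode, 2), keep[:30].mean())     # mode near 2.0, inliers kept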


International Conference on Image Processing | 2006

Aggregated Dynamic Background Modeling

Amit Adam; Ehud Rivlin; Ilan Shimshoni

Standard practice in background modeling is to learn a separate model for every pixel in the image. However, in dynamic scenes the connection between an observation and the place where it was observed is much less important and is usually random. For example, a wave observed in an ocean scene could easily have been observed at another place in the image. Moreover, during a limited learning period we cannot expect to observe every possible background behavior at every pixel. In this paper we therefore develop a background model in which observations are decoupled from the place in the image where they were observed. A single nonparametric model is used to describe the dynamic region of the scene, aggregating the observations from the whole region. Using high-order features, we demonstrate the feasibility of our approach on challenging ocean scenes using only grayscale information.
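
A minimal sketch of the decoupling idea, assuming a kernel density estimate as the single nonparametric model: observations from the whole dynamic region are pooled, and a new observation is scored against the pooled model rather than against a per-pixel one. The features and all numbers below are placeholders, not the paper's.

import numpy as np

# Minimal sketch of the decoupling idea: pool feature observations from
# the whole dynamic region into one kernel density model and score new
# observations against it, instead of keeping a model per pixel.

def kde_score(sample, pooled, bandwidth):
    """Mean Gaussian-kernel likelihood of `sample` under the pooled data."""
    z = (pooled - sample) / bandwidth
    return np.mean(np.exp(-0.5 * np.sum(z * z, axis=1)))

rng = np.random.default_rng(0)
pooled = rng.normal(0.0, 1.0, size=(5000, 2))  # features from the whole region
wave_like = np.array([0.2, -0.5])              # resembles pooled background
novel = np.array([6.0, 6.0])                   # unlike anything observed

# The wave-like observation is explained by the aggregated model even if
# it never occurred at this particular pixel; the novel one is not.
print(kde_score(wave_like, pooled, 0.5) > kde_score(novel, pooled, 0.5))  # True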


Computer Vision and Pattern Recognition | 2017

Dynamic Time-of-Flight

Michael Schober; Amit Adam; Omer Yair; Shai Mazor; Sebastian Nowozin

Time-of-flight (TOF) depth cameras provide robust depth inference at low power requirements in a wide variety of consumer and industrial applications. These cameras reconstruct a single depth frame from a given set of infrared (IR) frames captured over a very short exposure period. Operating in this mode, the camera essentially forgets all previously captured information and performs depth inference from scratch for every frame. We challenge this practice and propose using previously captured information when inferring depth. An inherent problem we have to address is camera motion over this longer period of collecting observations. We derive a probabilistic framework combining a simple but robust model of camera and object motion with an observation model. This combination allows us to integrate information over multiple frames while remaining robust to rapid changes. Operating the camera in this manner has implications for both computational efficiency and how information should be captured. We address these two issues and demonstrate a real-time TOF system with robust temporal integration that improves depth accuracy over strong baseline methods, including adaptive spatio-temporal filters.
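
As a loose illustration of robust temporal integration (not the paper's probabilistic framework), the sketch below integrates depth with an exponential memory but falls back to the fresh measurement wherever the frame-to-frame change is too large to be explained by noise, so motion does not get smeared.

import numpy as np

# Loose sketch of motion-robust temporal integration (not the paper's
# probabilistic framework): keep an exponential memory of depth, but reset
# to the fresh measurement wherever the change is too large to be sensor
# noise, so moving objects are not smeared.

def integrate_depth(prev, new, noise_std, alpha=0.8, gate=3.0):
    innovation = np.abs(new - prev)
    stable = innovation < gate * noise_std     # consistent with noise only?
    fused = alpha * prev + (1 - alpha) * new   # integrate where stable
    return np.where(stable, fused, new)        # reset where the scene moved

prev = np.array([2.00, 2.00])                  # meters, from earlier frames
new = np.array([2.02, 3.50])                   # second pixel saw real motion
print(integrate_depth(prev, new, noise_std=0.02))  # [2.004, 3.5]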


International Conference on Pattern Recognition | 2000

Using model-based localization with active navigation

Amit Adam; Ehud Rivlin; Ilan Shimshoni

Vision is an important sensor for mobile robot navigation. One vision-based approach to localization is to compute camera egomotion with respect to base images. What characterizes this method of localization is that its performance varies greatly across different positions. Active navigation is an approach to path and sensing planning designed to address such varying sensor performance across the configuration space. We describe how to integrate a vision-based localization sensor with active navigation. We explain the localization process, how its performance varies across the configuration space, and how active navigation uses this variation.
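
A hypothetical sketch of how a planner can use such a performance map: penalize path cells where the expected localization error is high, so equally long paths are ranked by how well the sensor performs along them. All values below are invented.

import numpy as np

# Hypothetical use of a performance map in planning: add a penalty for
# traversing cells where the expected localization error is high, so the
# planner prefers routes where the vision sensor works well.

perf_map = np.array([[0.1, 0.9, 0.2],
                     [0.2, 0.8, 0.1],
                     [0.1, 0.3, 0.1]])          # expected pose error per cell

def path_cost(path, perf_map, w=10.0):
    """Path length plus a weighted localization-uncertainty penalty."""
    return len(path) + w * sum(perf_map[r, c] for r, c in path)

top = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2)]   # through uncertain cells
left = [(0, 0), (1, 0), (2, 0), (2, 1), (2, 2)]  # same length, reliable cells
print(path_cost(top, perf_map), path_cost(left, perf_map))  # ~19.0 vs ~13.0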


Computer Vision and Pattern Recognition | 2006

Robust Fragments-based Tracking using the Integral Histogram

Amit Adam; Ehud Rivlin; Ilan Shimshoni
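
The entry's title refers to the integral histogram, which is straightforward to sketch: one integral image per histogram bin makes the histogram of any axis-aligned rectangle an O(bins) lookup, which is what keeps scoring many fragment (patch) hypotheses per frame cheap. The code below is our own illustration, not the paper's.

import numpy as np

# Our illustration of the integral histogram: one integral image per bin,
# so the histogram of any axis-aligned rectangle is an O(bins) lookup
# regardless of its size.

def integral_histogram(img, n_bins):
    """Cumulative per-bin counts; img values assumed in [0, n_bins)."""
    h, w = img.shape
    ih = np.zeros((h + 1, w + 1, n_bins))
    for b in range(n_bins):
        ih[1:, 1:, b] = np.cumsum(np.cumsum(img == b, axis=0), axis=1)
    return ih

def rect_histogram(ih, top, left, bottom, right):
    """Histogram of img[top:bottom, left:right] from four lookups per bin."""
    return (ih[bottom, right] - ih[top, right]
            - ih[bottom, left] + ih[top, left])

img = np.random.default_rng(0).integers(0, 8, size=(40, 40))
ih = integral_histogram(img, 8)
ref = np.bincount(img[5:20, 10:30].ravel(), minlength=8)
print(np.array_equal(rect_histogram(ih, 5, 10, 20, 30), ref))  # True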

Collaboration


Dive into Amit Adam's collaborations.

Top Co-Authors

Ehud Rivlin

Technion – Israel Institute of Technology

Hector Rotstein

Rafael Advanced Defense Systems

Igor Kviatkovsky

Technion – Israel Institute of Technology

Ron Kimmel

Technion – Israel Institute of Technology
