Publication


Featured research published by Benny Rousso.


International Journal of Computer Vision | 1994

Computing occluding and transparent motions

Michal Irani; Benny Rousso; Shmuel Peleg

Computing the motions of several moving objects in image sequences involves simultaneous motion analysis and segmentation. This task can become complicated when image motion changes significantly between frames, as with camera vibrations. Such vibrations make tracking in longer sequences harder, as temporal motion constancy cannot be assumed. The problem becomes even more difficult in the case of transparent motions. A method is presented for detecting and tracking occluding and transparent moving objects, which uses temporal integration without assuming motion constancy. Each new frame in the sequence is compared to a dynamic internal representation image of the tracked object. The internal representation image is constructed by temporally integrating frames after registration based on the motion computation. The temporal integration maintains sharpness of the tracked object, while blurring objects that have other motions. Comparing new frames to the internal representation image causes the motion analysis algorithm to continue tracking the same object in subsequent frames, and to improve the segmentation.
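
The core loop of temporal integration can be sketched in a few lines. The following is a minimal illustration, not the authors' implementation: it assumes pure-translation registration (OpenCV's phase correlation standing in for the paper's parametric motion estimation) and a simple running average as the internal representation image.

```python
import cv2
import numpy as np

def temporal_integration(frames, alpha=0.3):
    """Track the dominant motion by registering each new frame to a running
    internal representation and blending it in. Sketch only: the paper uses
    parametric motion estimation plus segmentation, not phase correlation."""
    rep = frames[0].astype(np.float32)
    for frame in frames[1:]:
        f = frame.astype(np.float32)
        # Dominant translation between the internal representation and the
        # new frame (stand-in for the paper's motion computation).
        (dx, dy), _ = cv2.phaseCorrelate(rep, f)
        # Warp the new frame back onto the representation (sign convention
        # depends on the registration direction).
        M = np.float32([[1, 0, -dx], [0, 1, -dy]])
        aligned = cv2.warpAffine(f, M, (f.shape[1], f.shape[0]))
        # Temporal integration: the registered (tracked) object stays sharp,
        # while objects with other motions are blurred out.
        rep = (1 - alpha) * rep + alpha * aligned
    return rep
```

Because the representation stays sharp only for the object whose motion is being compensated, comparing each new frame against it keeps the motion analysis locked onto that object and gradually improves the segmentation.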


European Conference on Computer Vision | 1992

Detecting and Tracking Multiple Moving Objects Using Temporal Integration

Michal Irani; Benny Rousso; Shmuel Peleg

Tracking multiple moving objects in image sequences involves a combination of motion detection and segmentation. This task can become complicated as image motion may change significantly between frames, as with camera vibrations. Such vibrations make tracking in longer sequences harder, as temporal motion constancy cannot be assumed.


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2000

Mosaicing on adaptive manifolds

Shmuel Peleg; Benny Rousso; Alex Rav-Acha; Assaf Zomet

Image mosaicing is commonly used to increase the visual field of view by pasting together many images or video frames. Existing mosaicing methods are based on projecting all images onto a predetermined single manifold: A plane is commonly used for a camera translating sideways, a cylinder is used for a panning camera, and a sphere is used for a camera which is both panning and tilting. While different mosaicing methods should therefore be used for different types of camera motion, more general types of camera motion, such as forward motion, are practically impossible for traditional mosaicing. A new methodology to allow image mosaicing in more general cases of camera motion is presented. Mosaicing is performed by projecting thin strips from the images onto manifolds which are adapted to the camera motion. While the limitations of existing mosaicing techniques are a result of using predetermined manifolds, the use of more general manifolds overcomes these limitations.
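
As a toy illustration of strip-based mosaicing (a simplification, not the paper's adaptive-manifold projection): take a thin strip from each frame and paste the strips side by side. In the actual method the strip position, shape, and width are derived from the computed camera motion, so that the strips are effectively projected onto a manifold adapted to that motion; the sketch below simply assumes sideways translation at a roughly constant speed.

```python
import numpy as np

def strip_mosaic(frames, strip_width=8):
    """Paste a thin vertical strip from the center of each frame side by
    side. Toy stand-in only: the real method adapts strip geometry to the
    estimated camera motion instead of using a fixed central column band."""
    half = strip_width // 2
    strips = []
    for frame in frames:
        c = frame.shape[1] // 2          # center column of this frame
        strips.append(frame[:, c - half:c + half])
    return np.hstack(strips)             # frames must share the same height
```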


IEEE Transactions on Pattern Analysis and Machine Intelligence | 1997

Recovery of ego-motion using region alignment

Michal Irani; Benny Rousso; Shmuel Peleg

A method for computing the 3D camera motion (the ego-motion) in a static scene is described, where initially a detected 2D motion between two frames is used to align corresponding image regions. We prove that such a 2D registration removes all effects of camera rotation, even for those image regions that remain misaligned. The resulting residual parallax displacement field between the two region-aligned images is an epipolar field centered at the FOE (Focus-of-Expansion). The 3D camera translation is recovered from the epipolar field. The 3D camera rotation is recovered from the computed 3D translation and the detected 2D motion. The decomposition of image motion into a 2D parametric motion and residual epipolar parallax displacements avoids many of the inherent ambiguities and instabilities associated with decomposing the image motion into its rotational and translational components, and hence makes the computation of ego-motion or 3D structure estimation more robust.
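
The epipolar-field step lends itself to a small least-squares sketch. The helper below is hypothetical and assumes the 2D region alignment has already cancelled the rotation, so every residual parallax vector should point along the line joining its pixel to the FOE; each pixel then contributes one linear constraint on the FOE coordinates.

```python
import numpy as np

def estimate_foe(points, flow):
    """Least-squares Focus-of-Expansion from a residual parallax field.
    points: (N, 2) pixel coordinates (x, y); flow: (N, 2) residual vectors (u, v).
    Sketch only; not the paper's estimator."""
    x, y = points[:, 0], points[:, 1]
    u, v = flow[:, 0], flow[:, 1]
    # Each residual vector is parallel to (x - ex, y - ey):
    #   u*(y - ey) - v*(x - ex) = 0   =>   v*ex - u*ey = v*x - u*y
    A = np.stack([v, -u], axis=1)
    b = v * x - u * y
    foe, *_ = np.linalg.lstsq(A, b, rcond=None)
    return foe  # (ex, ey) in image coordinates
```

With the FOE (and hence the translation direction) fixed, the rotation follows from the translation and the originally detected 2D motion, as the abstract describes.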


Computer Vision and Pattern Recognition | 1996

Robust recovery of camera rotation from three frames

Benny Rousso; Shai Avidan; Amnon Shashua; Shmuel Peleg

Computing camera rotation from image sequences can be used for image stabilization, and when the camera rotation is known the computation of translation and scene structure are much simplified as well. A robust approach for recovering camera rotation is presented, which does not assume any specific scene structure (e.g. no planar surface is required), and which avoids prior computation of the epipole. Given two images taken from two different viewing positions, the rotation matrix between the images can be computed from any three homography matrices. The homographies are computed using the trilinear tensor which describes the relations between the projections of a 3D point into three images. The entire computation is linear for small angles, and is therefore fast and stable. Iterating the linear computation can then be used to recover larger rotations as well.
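
For context, the geometry behind the three-homography formulation can be written in standard plane-induced homography notation (textbook background, not the paper's small-angle derivation): a plane with unit normal $n_i$ at distance $d_i$ from the first camera induces, between two views related by rotation $R$ and translation $t$ and with internal calibration $K$,

$$
H_i \;\cong\; K\left(R + \frac{1}{d_i}\, t\, n_i^{\top}\right) K^{-1}, \qquad i = 1, 2, 3 .
$$

The rotational part is common to all three homographies while the $t\,n_i^{\top}$ terms differ from plane to plane, which is consistent with the abstract's claim that three homographies suffice to isolate the rotation without first computing the epipole.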


Computer Analysis of Images and Patterns | 1993

Robust Recovery of Ego-Motion

Michal Irani; Benny Rousso; Shmuel Peleg

A robust method is introduced for computing the camera motion (the ego-motion) in a static scene. The method is based on detecting a single planar surface in the scene directly from image intensities, and computing its 2D motion in the image plane. The detected 2D motion of the planar surface is used to register the images, so that the planar surface appears stationary. The resulting displacement field for the entire scene in such registered frames is affected only by the 3D translation of the camera, which is computed by finding the focus-of-expansion in the registered frames. This step is followed by computing the 3D rotation to complete the computation of the ego-motion.
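
One least-squares step of a direct, intensity-based affine motion estimate gives the flavor of the dominant-plane registration stage. The real computation is iterative, coarse-to-fine, and robust to pixels that are off the dominant plane; the hypothetical sketch below attempts none of that.

```python
import numpy as np

def affine_motion_step(I0, I1):
    """Single least-squares estimate of affine image motion between two
    grayscale frames, via the brightness-constancy constraint
    Ix*u + Iy*v + It = 0 with u = a*x + b*y + c and v = d*x + e*y + f."""
    I0 = I0.astype(np.float64)
    I1 = I1.astype(np.float64)
    Iy, Ix = np.gradient(I0)             # spatial gradients (rows, cols)
    It = I1 - I0                         # temporal difference
    h, w = I0.shape
    ys, xs = np.mgrid[0:h, 0:w]
    x, y = xs.ravel(), ys.ravel()
    gx, gy, gt = Ix.ravel(), Iy.ravel(), It.ravel()
    A = np.stack([gx * x, gx * y, gx, gy * x, gy * y, gy], axis=1)
    p, *_ = np.linalg.lstsq(A, -gt, rcond=None)
    return p                             # affine parameters (a, b, c, d, e, f)
```

Warping one frame by the recovered affine motion registers the dominant plane; the residual displacements in the registered frames are then due to camera translation alone, which is where the focus-of-expansion search described in the abstract takes over.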


Computer Vision and Pattern Recognition | 1998

Varying focal length self-calibration and pose estimation from two images

Benny Rousso; Erez Shilat

This paper presents a self-calibration and pose estimation method for two cameras that differ only in focal length. The estimation of the rotation and focal lengths is independent of the translation recovery. Unlike most methods, we do not initialize our recovery with the projective camera. Instead, we estimate the ego-motion and calibration from three homographies. These homographies can be easily obtained from a fundamental matrix or a trifocal tensor.
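
A hedged sketch of the camera model this setup implies (standard notation, not the paper's derivation): with square pixels and the principal point at the image center, the only unknown internal parameter per view is the focal length, and the homography of the plane at infinity between the two views is

$$
K = \begin{pmatrix} f & 0 & 0 \\ 0 & f & 0 \\ 0 & 0 & 1 \end{pmatrix},
\qquad
H_\infty \;\cong\; K' R\, K^{-1},
$$

which depends only on the rotation and the two focal lengths, in line with the abstract's point that rotation and focal lengths can be recovered independently of the translation.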


Panoramic Vision | 2001

Mosaicing with strips on adaptive manifolds

Shmuel Peleg; Benny Rousso; Alex Rav-Acha; Assaf Zomet

Creating pictures with a larger field of view by combining many smaller images has been common since the beginning of photography, as the camera's field of view is smaller than the human field of view. In addition, some large objects cannot be captured in a single picture, as is the case in aerial photography. Using omnidirectional cameras [195] can sometimes provide a partial solution, but the images obtained with such cameras have substantial distortions, and capturing a wide field of view with the limited resolution of a video camera compromises image resolution. A common solution is photo-mosaicing: aligning and pasting pictures, or frames in a video sequence, to create a wider view. Digital photography enabled new implementations of mosaicing [184, 185, 212, 38, 122, 273], which were first applied to aerial and satellite images, and later used for scene and object representation.


Archive | 1998

Generalized panoramic mosaic

Shmuel Peleg; Benny Rousso


International Conference on Computer Vision | 1998

Universal mosaicing using pipe projection

Benny Rousso; Shmuel Peleg; Ilan Finci; Alex Rav-Acha

Collaboration


Dive into Benny Rousso's collaborations.

Top Co-Authors

Shmuel Peleg (Hebrew University of Jerusalem)
Michal Irani (Weizmann Institute of Science)
Alex Rav-Acha (Hebrew University of Jerusalem)
Ilan Finci (Hebrew University of Jerusalem)
Assaf Zomet (Hebrew University of Jerusalem)
Amnon Shashua (Hebrew University of Jerusalem)
Moshe Ben-Ezra (Hebrew University of Jerusalem)