Publication


Featured research published by Moshe Ben-Ezra.


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2001

Omnistereo: panoramic stereo imaging

Shmuel Peleg; Moshe Ben-Ezra; Yael Pritch

An omnistereo panorama consists of a pair of panoramic images, where one panorama is for the left eye and another panorama is for the right eye. The panoramic stereo pair provides a stereo sensation up to a full 360 degrees. Omnistereo panoramas can be constructed by mosaicing images from a single rotating camera. This approach also enables the control of stereo disparity, giving larger baselines for faraway scenes and smaller baselines for closer scenes. Capturing panoramic omnistereo images with a rotating camera makes it impossible to capture dynamic scenes at video rates and limits omnistereo imaging to stationary scenes. We present two possibilities for capturing omnistereo panoramas using optics without any moving parts. A special mirror is introduced such that viewing the scene through this mirror creates the same rays as those used with the rotating cameras. A lens for omnistereo panoramas is also introduced, together with the design of the mirror. Omnistereo panoramas can also be rendered by computer graphics methods to represent virtual environments.
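
A small sketch of the ray geometry behind this kind of mosaicing (an illustration under my own assumptions and naming, not the authors' implementation): the left-eye and right-eye panoramas can be thought of as collecting rays tangent to a common viewing circle, in opposite tangential directions.

    import numpy as np

    def omnistereo_rays(theta, r):
        """Left/right viewing rays for panning angle theta (radians) on a
        viewing circle of radius r, in the plane z = 0 (illustrative helper).

        Both rays start on the viewing circle and point along its tangent;
        the left-eye and right-eye rays take opposite tangential directions.
        """
        origin = r * np.array([np.cos(theta), np.sin(theta)])   # point on the circle
        tangent = np.array([-np.sin(theta), np.cos(theta)])     # unit tangent there
        right_ray = (origin, tangent)     # one tangential direction
        left_ray = (origin, -tangent)     # the opposite direction
        return left_ray, right_ray

    # Sanity check: every generated ray is tangent to the viewing circle,
    # i.e. the ray's line passes at distance r from the circle's centre.
    o, d = omnistereo_rays(0.7, r=0.03)[0]
    assert np.isclose(abs(o[0] * d[1] - o[1] * d[0]), 0.03)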


Computer Vision and Pattern Recognition | 1999

Stereo panorama with a single camera

Shmuel Peleg; Moshe Ben-Ezra

Full panoramic images, covering 360 degrees, can be created either by using panoramic cameras or by mosaicing together many regular images. Creating panoramic views in stereo, where one panorama is generated for the left eye and another for the right eye, is more problematic. Earlier attempts to mosaic images from a rotating pair of stereo cameras faced severe problems of parallax and of scale changes. A new family of multiple-viewpoint image projections, the Circular Projections, is developed. Two panoramic images taken using such projections can serve as a panoramic stereo pair. A system is described that generates a stereo panoramic image using circular projections from images or video taken by a single rotating camera. The system works in real time on a PC. It should be noted that the stereo images are created without computation of 3D structure; the depth effect is created only in the viewer's brain.
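
As a rough sketch of the single-rotating-camera construction described above (strip positions and widths are illustrative assumptions, not the paper's calibrated values): the left-eye panorama can be mosaiced from narrow vertical strips taken to one side of each frame's centre, and the right-eye panorama from strips taken on the other side.

    import numpy as np

    def stereo_panorama_from_rotation(frames, offset=40, strip=4):
        """Assemble a left/right panorama pair from frames of a camera
        rotating about a vertical axis (illustrative sketch).

        frames : list of HxWx3 images taken at equal rotation increments.
        offset : distance in pixels of the sampled strip from the image
                 centre; a larger offset acts like a larger stereo baseline.
        strip  : width in pixels of the vertical strip taken from each frame.
        """
        _, w, _ = frames[0].shape
        c = w // 2
        # Which side feeds which eye depends on the rotation direction;
        # the assignment below is one of the two possible conventions.
        left = np.hstack([f[:, c + offset : c + offset + strip] for f in frames])
        right = np.hstack([f[:, c - offset - strip : c - offset] for f in frames])
        return left, right

In practice the strip width follows from the rotation step between frames and the offset from the desired disparity; the sketch only illustrates the strip-sampling idea, which involves no computation of 3D structure.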


Computer Vision and Pattern Recognition | 2000

Cameras for stereo panoramic imaging

Shmuel Peleg; Yael Pritch; Moshe Ben-Ezra

A panorama for visual stereo consists of a pair of panoramic images, where one panorama is for the left eye, and another panorama is for the right eye. A panoramic stereo pair provides a stereo sensation up to a full 360 degrees. A stereo panorama cannot be photographed by two omnidirectional cameras from two viewpoints. It is normally constructed by mosaicing together images from a rotating stereo pair, or from a single moving camera. Capturing stereo panoramic images by a rotating camera makes it impossible to capture dynamic scenes at video rates, and limits stereo panoramic imaging to stationary scenes. This paper presents two possibilities for capturing stereo panoramic images using optics, without any moving parts. A special mirror is introduced such that viewing the scene through this mirror creates the same rays as those used with the rotating cameras. Such a mirror enables the capture of stereo panoramic movies with a regular video camera. A lens for stereo panoramas is also introduced. The designs of the mirror and of the lens are based on curves whose caustic is a circle.
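
One way to read the "curves whose caustic is a circle" condition (an interpretive note in my own notation, not the paper's derivation): the caustic is the envelope of the reflected rays, so requiring it to be a circle of radius r amounts to requiring every reflected ray to be tangent to that circle. A ray with origin o and unit direction \hat d is tangent to a circle of radius r centred at c exactly when the distance from c to the ray's supporting line equals r:

    \left\lVert (c - o) - \big((c - o) \cdot \hat d\big)\,\hat d \right\rVert = r

On this reading, the mirror and lens profiles are curves for which the whole family of reflected (or refracted) rays seen by the camera satisfies this tangency condition.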


Computer Vision and Pattern Recognition | 2000

Segmentation with invisible keying signal

Moshe Ben-Ezra

Chroma keying is the process of segmenting objects from images and video using color cues. A blue (or green) screen placed behind an object during recording is used in special effects and in virtual studios. The blue color is later replaced by a different background. A new method for automatic keying using an invisible signal is presented. The advantages of the new approach over conventional chroma keying include: (i) an unlimited color range for foreground objects; (ii) no foreground contamination by background color; (iii) better performance under non-uniform illumination; (iv) features for generating refraction and reflection of dynamic objects. The method can be used in real time, and no user assistance is required. A new design of a catadioptric camera and a single-chip sensor for keying are also presented.
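
For context, here is a minimal sketch of the conventional chroma keying that the method above improves on (this is the baseline colour-cue technique, not the invisible-signal method itself; the key colour and threshold are illustrative values):

    import numpy as np

    def chroma_key(frame, background, key=(0, 0, 255), threshold=60.0):
        """Naive chroma keying: pixels close to the key colour are treated as
        the keying screen and replaced with the new background.

        frame, background : HxWx3 uint8 images of the same size.
        key               : key colour, here blue in RGB order.
        threshold         : Euclidean colour distance below which a pixel
                            counts as key colour.
        """
        dist = np.linalg.norm(frame.astype(float) - np.array(key, float), axis=2)
        mask = dist < threshold                  # True where the screen shows
        out = frame.copy()
        out[mask] = background[mask]             # composite over the new background
        return out

The limitations listed in the abstract follow directly from this kind of mask: foreground pixels near the key colour are lost, and colour spill from the screen contaminates object edges.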


Computer Vision and Image Understanding | 2000

Real-Time Motion Analysis with Linear Programming

Moshe Ben-Ezra; Shmuel Peleg; Michael Werman

A method to compute motion models in real time from point-to-line correspondences using linear programming is presented. Point-to-line correspondences are the most reliable measurements for image motion given the aperture effect, and it is shown how they can approximate other motion measurements as well. An error measure for image alignment using the L1 metric and based on point-to-line correspondences achieves results that are more robust than those obtained with the commonly used L2 metric. The L1 error measure is minimized using linear programming. While estimators based on L1 are not robust in the breakdown-point sense, experiments show that the proposed method is robust enough to allow accurate motion recovery over hundreds of consecutive frames. The L1 solution is compared to standard M-estimators and Least Median of Squares (LMedS), and it is shown that the L1 metric provides a reasonable and efficient compromise for various scenarios. The entire computation is performed in real time on a PC without special hardware.
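
As a hedged sketch of how an L1 alignment error over point-to-line correspondences can be cast as a linear program (shown here for a pure 2D translation model and solved with scipy.optimize.linprog; the paper's motion models and solver details may differ), each correspondence constrains the moved point to the line n_i . x = d_i and contributes one slack variable bounding the absolute residual:

    import numpy as np
    from scipy.optimize import linprog

    def l1_translation(points, normals, offsets):
        """Estimate a 2D translation t minimising  sum_i |n_i . (p_i + t) - d_i|
        over point-to-line correspondences (line i is n_i . x = d_i).

        Cast as a linear program with one slack e_i per correspondence:
            minimise    sum_i e_i
            subject to   n_i . t - e_i <=  d_i - n_i . p_i
                        -n_i . t - e_i <= -(d_i - n_i . p_i)
        An affine or projective motion model would add more parameters but
        keep the constraints linear.
        """
        p = np.asarray(points, float)            # N x 2 image points
        n = np.asarray(normals, float)           # N x 2 unit line normals
        d = np.asarray(offsets, float)           # N line offsets
        N = len(p)
        r = d - np.einsum('ij,ij->i', n, p)      # residual of each line at t = 0

        c = np.concatenate([np.zeros(2), np.ones(N)])         # minimise sum of slacks
        A_ub = np.block([[ n, -np.eye(N)],
                         [-n, -np.eye(N)]])
        b_ub = np.concatenate([r, -r])
        bounds = [(None, None)] * 2 + [(0, None)] * N         # t free, slacks >= 0

        res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method='highs')
        return res.x[:2]                                      # estimated translation

At the optimum each slack equals the absolute residual of its correspondence, so the objective is exactly the L1 alignment error discussed above.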


Proceedings IEEE Workshop on Omnidirectional Vision (Cat. No.PR00704) | 2000

Automatic disparity control in stereo panoramas (OmniStereo)

Yael Pritch; Moshe Ben-Ezra; Shmuel Peleg

An omnistereo panorama consists of a pair of panoramic images, where one panorama is for the left eye, and another panorama is for the right eye. An omnistereo pair provides a stereo sensation up to a full 360 degrees. Omnistereo panoramas can be created by mosaicing images from a rotating video camera, or by specially designed cameras. The stereo sensation is a function of the disparity between the left and right images. This disparity is a function of the ratio of the distance between the cameras (the baseline) and the distance to the object: disparity is larger with a longer baseline and closer objects. Since our eyes are a fixed distance apart, we lose stereo sensation for faraway objects. It is possible to control the disparity in omnistereo panoramas that are generated by mosaicing images from a rotating camera. The baseline can be made larger for faraway scenes, and smaller for nearer scenes. A method is described for the construction of omnistereo panoramas having larger baselines for faraway scenes, and smaller baselines for closer scenes. The baseline can change within the panorama from directions with closer objects to directions with farther objects.
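
The baseline/distance trade-off above can be summarised with the usual small-angle relation (symbols are my own, chosen for illustration): for baseline B and object distance Z, the angular disparity is roughly

    \delta \approx \frac{B}{Z}

so holding a target disparity \delta^{*} while the scene distance Z(\theta) changes with viewing direction \theta suggests choosing the baseline direction by direction as B(\theta) \approx \delta^{*}\, Z(\theta): larger toward faraway parts of the scene and smaller toward nearby parts.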


International Conference on Computer Vision | 1999

Real-time motion analysis with linear-programming

Moshe Ben-Ezra; Shmuel Peleg; Michael Werman

A method to compute motion models in real time from point-to-line correspondences using linear programming is presented. Point-to-line correspondences are the most reliable motion measurements given the aperture effect, and it is shown how they can approximate other motion measurements as well. Using an L1 error measure for image alignment based on point-to-line correspondences, and minimizing this measure with linear programming, achieves results that are more robust than those of the commonly used L2 metric. While estimators based on L1 are not theoretically robust, experiments show that the proposed method is robust enough to allow accurate motion recovery in hundreds of consecutive frames. The entire computation is performed in real time on a PC with no special hardware.


Workshop on Applications of Computer Vision | 1998

Efficient computation of the most probable motion from fuzzy correspondences

Moshe Ben-Ezra; Shmuel Peleg; Michael Werman

An algorithm is presented for finding the most probable image motion between two images from fuzzy point correspondences. In a fuzzy correspondence, a point in one image is assigned to a region in the other image. Such a region can be a line (aperture effect) or a convex polygon. Noise and outliers are always present, and points may belong to different motions. The presented algorithm, which uses linear programming, recovers the motion parameters and performs outlier rejection and motion segmentation at the same time. The linear program computes the global optimum without the need for an initial guess.
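
A hedged way to write the linear program suggested by this abstract (notation mine, and shown for a translation-only motion t; the paper's models may be richer): if point p_i is assigned to the convex region given by half-planes a_{ij}^{\top} x \le b_{ij}, give it one non-negative slack e_i and solve

    \min_{t,\; e \ge 0} \; \sum_i e_i
    \quad \text{subject to} \quad
    a_{ij}^{\top}(p_i + t) \;\le\; b_{ij} + e_i
    \quad \text{for every face } j \text{ of region } i .

Correspondences that no plausible motion can satisfy end up with large slacks, which is one reading of how outlier rejection can fall out of the same program that recovers the motion parameters.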


Archive | 2001

Optics for Omnistereo Imaging

Yael Pritch; Moshe Ben-Ezra; Shmuel Peleg

Omnistereo Panoramas use a new scene-to-image projection, the circular projection, that enables stereo in a full 360° panoramic view. Circular projections are necessary since it is impossible to create two stereo panoramic images using perspective projections.


European Conference on Computer Vision | 2000

Model Based Pose Estimator Using Linear-Programming

Moshe Ben-Ezra; Shmuel Peleg; Michael Werman

Given a 3D object and some measurements for points in this object, it is desired to find the 3D location of the object. A new model-based pose estimator from stereo pairs based on linear programming (LP) is presented. In the presence of outliers, the new LP estimator provides better results than maximum likelihood estimators such as weighted least squares, and is usually almost as good as robust estimators such as least median of squares (LMedS). In the presence of noise, the new LP estimator provides better results than robust estimators such as LMedS, and is slightly inferior to maximum likelihood estimators such as weighted least squares. In the presence of noise and outliers, especially for wide-angle stereo, the new estimator provides the best results. The LP estimator is based on the correspondence of points to convex polyhedra. Each point corresponds to a unique polyhedron, which represents its uncertainty in 3D as computed from the stereo pair. A polyhedron can also be computed for a 2D data point by using a priori depth boundaries. The LP estimator is a single-phase estimator (no separate outlier rejection phase) solved in a single iteration (no re-weighting), and it always converges to the global minimum of its error function. The estimator can be extended to include random sampling and re-weighting within the standard framework of a linear program.
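
A rough sketch of why such a pose problem can remain a linear program (my notation and a small-angle linearisation of rotation, not necessarily the paper's parameterisation): with R \approx I + [\omega]_\times, a model point X_i matched to the polyhedron \{x : a_{ij}^{\top} x \le b_{ij}\} gives constraints that are linear in (\omega, t), so one can solve

    \min_{\omega,\, t,\; e \ge 0} \; \sum_i e_i
    \quad \text{subject to} \quad
    a_{ij}^{\top}\big( X_i + \omega \times X_i + t \big) \;\le\; b_{ij} + e_i
    \quad \text{for every face } j \text{ of polyhedron } i .

As with the motion estimators above, there is a single solve with no re-weighting, and points whose uncertainty polyhedra cannot be reached by any nearby pose simply take large slacks.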

Collaboration


Dive into Moshe Ben-Ezra's collaborations.

Top Co-Authors

Shmuel Peleg, Hebrew University of Jerusalem
Yael Pritch, Hebrew University of Jerusalem
Michael Werman, Hebrew University of Jerusalem
Robert S Rosenschein, Hebrew University of Jerusalem
Benny Rousso, Hebrew University of Jerusalem
Yaneer Bar-Yam, New England Complex Systems Institute