Publication


Featured research published by Gideon Stein.


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2000

Monitoring activities from multiple video streams: establishing a common coordinate frame

Lily Lee; Raquel A. Romano; Gideon Stein

Monitoring of large sites requires coordination between multiple cameras, which in turn requires methods for relating events between distributed cameras. This paper tackles the problem of automatic external calibration of multiple cameras in an extended scene, that is, full recovery of their 3D relative positions and orientations. Because the cameras are placed far apart, brightness or proximity constraints cannot be used to match static features, so we instead apply planar geometric constraints to moving objects tracked throughout the scene. By robustly matching and fitting tracked objects to a planar model, we align the scene's ground plane across multiple views and decompose the planar alignment matrix to recover the 3D relative camera and ground plane positions. We demonstrate this technique both in a controlled lab setting, where we test the effects of errors in the intrinsic camera parameters, and in an uncontrolled, outdoor setting. In the latter, we do not assume synchronized cameras, and we show that enforcing geometric constraints enables us to align the tracking data in time. In spite of noise in the intrinsic camera parameters and in the image data, the system successfully transforms multiple views of the scene's ground plane to an overhead view and recovers the relative 3D camera and ground plane positions.
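
To make the core step concrete, here is a minimal sketch (not the authors' code) of fitting the ground-plane homography to tracked-object correspondences and decomposing it into relative pose, using OpenCV. The point arrays and the intrinsic matrix K are assumed inputs.

```python
import numpy as np
import cv2

def align_ground_plane(pts_cam1, pts_cam2, K):
    """pts_cam1, pts_cam2: Nx2 float arrays of corresponding tracked-object
    ground-contact positions (e.g., vehicle or pedestrian footprints) seen
    by two distant cameras; K: 3x3 intrinsic matrix."""
    # Robust planar fit: RANSAC rejects tracked points that do not lie on
    # the common ground plane.
    H, inlier_mask = cv2.findHomography(pts_cam1, pts_cam2, cv2.RANSAC, 3.0)
    # Decompose the planar alignment matrix into candidate rotations,
    # translations, and plane normals (up to the usual four-fold ambiguity,
    # resolved by visibility and cheirality checks).
    n, rotations, translations, normals = cv2.decomposeHomographyMat(H, K)
    return H, rotations, translations, normals
```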


Computer Vision and Pattern Recognition | 1997

Lens distortion calibration using point correspondences

Gideon Stein

This paper describes a new method for lens distortion calibration using only point correspondences in multiple views, without the need to know either the 3D locations of the points or the camera locations. The standard lens distortion model is a model of the deviations of a real camera from the ideal pinhole or projective camera model. Given multiple views of a set of corresponding points taken by ideal pinhole cameras, there exist epipolar and trilinear constraints among pairs and triplets of these views. In practice, due to noise in feature detection and due to lens distortion, these constraints do not hold exactly, and we get some error. The calibration is a search for the lens distortion parameters that minimize this error. Using simulations and experiments with real images, we explore the properties of this method. We describe its use with the standard lens distortion model (radial and decentering distortion), but it could also be used with any other parametric distortion model. Finally, we demonstrate that lens distortion calibration improves the accuracy of 3D reconstruction.
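
A hedged sketch of the search the paper describes: parameterize radial distortion, undistort candidate point matches, and minimize the symmetric epipolar distance under the fundamental matrix. Function and variable names here are illustrative, not the paper's code.

```python
import numpy as np
import cv2
from scipy.optimize import minimize

def undistort_radial(pts, k1, k2, center):
    # Standard radial model about `center`: x_u = c + (x_d - c)(1 + k1*r^2 + k2*r^4).
    d = pts - center
    r2 = np.sum(d ** 2, axis=1, keepdims=True)
    return center + d * (1.0 + k1 * r2 + k2 * r2 ** 2)

def epipolar_error(params, pts1, pts2, center):
    k1, k2 = params
    u1 = undistort_radial(pts1, k1, k2, center)
    u2 = undistort_radial(pts2, k1, k2, center)
    F, _ = cv2.findFundamentalMat(u1, u2, cv2.FM_8POINT)
    if F is None:
        return 1e9
    h1 = np.hstack([u1, np.ones((len(u1), 1))])
    h2 = np.hstack([u2, np.ones((len(u2), 1))])
    l2 = h1 @ F.T  # epipolar lines in image 2 (OpenCV convention: h2' F h1 = 0)
    l1 = h2 @ F    # epipolar lines in image 1
    d2 = np.abs(np.sum(h2 * l2, axis=1)) / np.hypot(l2[:, 0], l2[:, 1])
    d1 = np.abs(np.sum(h1 * l1, axis=1)) / np.hypot(l1[:, 0], l1[:, 1])
    return np.mean(d1 + d2)

# The calibration is then a search over the distortion parameters:
# res = minimize(epipolar_error, x0=[0.0, 0.0], args=(pts1, pts2, center),
#                method="Nelder-Mead")
```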


IEEE Intelligent Vehicles Symposium | 2004

Forward collision warning with a single camera

Erez Dagan; Ofer Mano; Gideon Stein; Amnon Shashua

The large number of rear-end collisions due to driver inattention has been identified as a major automotive safety issue. Even a short advance warning can significantly reduce the number and severity of collisions. This paper describes a vision-based forward collision warning (FCW) system for highway safety. The algorithm computes the time to contact (TTC) and a possible collision course directly from the size and position of the vehicles in the image, which are the natural measurements for a vision-based system, without having to compute a 3D representation of the scene. The use of a single low-cost image sensor results in an affordable system that is simple to install. The system has been implemented on real-time hardware and test-driven on highways. Collision avoidance tests have also been performed on test tracks.
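
The key quantity fits in a few lines: under constant relative speed the vehicle's image width w scales as 1/Z, so TTC = w / (dw/dt), with no range estimate needed. A minimal sketch, assuming the widths come from a vehicle detector:

```python
def time_to_contact(width_prev, width_curr, dt):
    """width_prev, width_curr: image width (pixels) of the lead vehicle in two
    frames dt seconds apart. Since w = f*W/Z, the ratio w/(dw/dt) equals the
    time to contact Z/(-dZ/dt)."""
    dw_dt = (width_curr - width_prev) / dt
    if dw_dt <= 0:
        return float("inf")  # target holding distance or receding: no warning
    return width_curr / dw_dt
```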


Computer Vision and Pattern Recognition | 1999

Tracking from multiple view points: Self-calibration of space and time

Gideon Stein

This paper tackles the problem of self-calibration of multiple cameras that are very far apart. Given a set of feature correspondences, one can determine the camera geometry. The key problem we address is finding such correspondences. Since the camera geometry (location and orientation) and photometric characteristics vary considerably between images, one cannot use brightness and/or proximity constraints. Instead we propose a three-step approach: first, we use moving objects in the scene to determine a rough planar alignment; next, we use static features to improve the alignment; finally, we compute the epipolar geometry from the homography matrix of the planar alignment. We do not assume synchronized cameras, and we show that enforcing geometric constraints enables us to align the tracking data in time. We present results on challenging outdoor scenes using real-time tracking data.
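
The temporal alignment step can be sketched briefly: slide one track against the other in time and keep the integer frame offset at which the plane-induced homography transfers the track between views with least error. Track formats and the homography H are assumed inputs; this illustrates the idea, not the paper's implementation.

```python
import numpy as np

def transfer_error(track1, track2, H):
    """track1, track2: Nx2 image tracks sampled at a common frame rate;
    H: ground-plane homography mapping view 1 to view 2."""
    h = np.hstack([track1, np.ones((len(track1), 1))]) @ H.T
    projected = h[:, :2] / h[:, 2:3]
    return np.mean(np.linalg.norm(projected - track2, axis=1))

def best_time_offset(track1, track2, H, max_shift=50):
    # Score every candidate offset; geometric consistency peaks at the
    # true temporal misalignment between the unsynchronized cameras.
    best_err, best_s = np.inf, 0
    for s in range(-max_shift, max_shift + 1):
        a, b = track1[max(s, 0):], track2[max(-s, 0):]
        n = min(len(a), len(b))
        if n < 10:
            continue
        err = transfer_error(a[:n], b[:n], H)
        if err < best_err:
            best_err, best_s = err, s
    return best_s  # offset in frames
```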


International Conference on Computer Vision | 1995

Accurate internal camera calibration using rotation, with analysis of sources of error

Gideon Stein

This paper describes a simple and accurate method for internal camera calibration based on tracking image features through a sequence of images while the camera undergoes pure rotation. A special calibration object is not required, and the method can therefore be used both for laboratory calibration and for self-calibration in autonomous robots. Experimental results with real images show that focal length and aspect ratio can be found to within 0.15 percent, and lens distortion error can be reduced to a fraction of a pixel. The location of the principal point and the location of the center of radial distortion can each be found to within a few pixels. We perform a simple analysis to show to what extent the various technical details affect the accuracy of the results. We show that having pure rotation is important if the features are derived from objects close to the camera. In the basic method, accurate angle measurement is important. The need to accurately measure the angles can be eliminated by rotating the camera through a complete circle while taking an overlapping sequence of images and using the constraint that the sum of the angles must equal 360 degrees.
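
The closed-circle constraint admits a compact statement: under pure rotation the inter-image homography is H = K R K^{-1}, so a candidate calibration matrix K turns each measured homography into a rotation whose angle can be read off, and over a full panning circle those angles must sum to 360 degrees. A hedged sketch of the residual such a search could minimize (the homography list Hs is an assumed input):

```python
import numpy as np

def circle_closure_residual(K, Hs):
    """Hs: homographies between consecutive overlapping frames around one
    full rotation. Returns the deviation of the summed angles from 360."""
    total_deg = 0.0
    K_inv = np.linalg.inv(K)
    for H in Hs:
        R = K_inv @ H @ K
        R = R / np.cbrt(np.linalg.det(R))  # H is only defined up to scale
        cos_theta = np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0)
        total_deg += np.degrees(np.arccos(cos_theta))
    return total_deg - 360.0  # zero when K closes the circle exactly
```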


IEEE Intelligent Vehicles Symposium | 2003

Vision-based ACC with a single camera: bounds on range and range rate accuracy

Gideon Stein; Ofer Mano; Amnon Shashua

This paper describes a vision-based adaptive cruise control (ACC) system that uses a single camera as input. In particular, we discuss how to compute range and range rate from a single camera and how the imaging geometry affects range and range-rate accuracy. We determine bounds on the accuracy for a given configuration. These bounds in turn determine what steps must be taken to achieve good performance. The system has been implemented on a test vehicle and driven on various highways over thousands of miles.
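
The geometry behind the bounds fits in a few lines: with the camera at height H above a planar road, the point where a target's wheels meet the road at image distance y below the horizon lies at range Z = fH/y, so a one-pixel error in y costs roughly Z^2/(fH) meters of range. A small illustrative sketch (names and values are assumptions, not the paper's code):

```python
def range_from_road_contact(f_pixels, cam_height_m, y_pixels):
    """Range to the target's road-contact point, using Z = f*H/y, where y is
    the image distance (pixels) from the horizon to the contact point."""
    return f_pixels * cam_height_m / y_pixels

def range_error_bound(f_pixels, cam_height_m, range_m, dy_pixels=1.0):
    # Differentiating Z = f*H/y gives |dZ| = (Z^2 / (f*H)) * |dy|: range
    # accuracy degrades quadratically with distance.
    return (range_m ** 2) / (f_pixels * cam_height_m) * dy_pixels
```

For example, with f = 1000 pixels and a camera 1.2 m above the road, a one-pixel error at 90 m corresponds to roughly 6.75 m of range uncertainty, which is why the quadratic growth of the bound matters for ACC at highway distances.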


IEEE Intelligent Vehicles Symposium | 2000

A robust method for computing vehicle ego-motion

Gideon Stein; Ofer Mano; Amnon Shashua

We describe a robust method for computing the ego-motion of a vehicle relative to the road using input from a single camera mounted next to the rear-view mirror. Since feature points are unreliable in cluttered scenes, we use direct methods, in which image values in the two images are combined in a global probability function. Combined with the use of probability distribution matrices, this enables the formulation of a robust method that can ignore the large number of outliers one encounters in real traffic situations. The method has been tested in real-world environments and has been shown to be robust to glare, rain, and moving objects in the scene.
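
A hedged sketch of the direct-method idea: rather than matching feature points, score a candidate ego-motion by how well the road-plane warp it induces aligns consecutive frames photometrically, with a saturating per-pixel likelihood so outliers carry little weight. The planar warp below is a simplification of the paper's model.

```python
import numpy as np
import cv2

def motion_likelihood(img_prev, img_curr, H_motion, sigma=20.0):
    """H_motion: 3x3 homography induced on the road plane by a candidate
    vehicle motion. Returns a global alignment score in (0, 1]."""
    h, w = img_curr.shape[:2]
    warped = cv2.warpPerspective(img_prev, H_motion, (w, h))
    residual = warped.astype(np.float32) - img_curr.astype(np.float32)
    # Robust kernel: large residuals (moving cars, glare, raindrops)
    # saturate instead of dominating the global probability.
    return float(np.mean(np.exp(-(residual ** 2) / (2.0 * sigma ** 2))))
```

The ego-motion estimate is then the candidate that maximizes this score over a small search space of, say, pitch, yaw, and forward translation.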


Computer Vision and Pattern Recognition | 2005

A Computer Vision System on a Chip: a case study from the automotive domain

Gideon Stein; Elchanan Rushinek; Gaby Hayun; Amnon Shashua

The automotive market puts strict and often conflicting requirements on computer vision systems. On the one hand, the algorithms require considerable computing power to work reliably in real time and under a wide range of lighting conditions. On the other hand, the cost must be kept low, the package size must be small, and the power consumption must be low. In addition, automotive-qualified parts must be used, both to withstand the harsh operating environment and to guarantee long product life. To meet these conflicting requirements, Mobileye developed the EyeQ, a complete 'system on a chip' (SoC) with the computing power to support a variety of applications such as lane, vehicle, and pedestrian detection. This paper describes the process of designing an ASIC to support a family of vision algorithms.


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2000

Model-based brightness constraints: on direct estimation of structure and motion

Gideon Stein; Amnon Shashua

We describe a direct method for estimating structure and motion from the image intensities of multiple views. We extend the direct methods of Horn and Weldon (1988) to three views. Adding the third view enables us to solve for motion and compute a dense depth map of the scene directly from image spatio-temporal derivatives, in a linear manner, without first having to find point correspondences or compute optical flow. We describe the advantages and limitations of this method, which are then verified with experiments using real images.
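
The building block can be stated compactly: brightness constancy links spatio-temporal derivatives to motion, Ix*u + Iy*v + It = 0 at each pixel, and the model-based variant substitutes a parametric structure-and-motion expression for (u, v), giving equations that are linear in the unknowns. A minimal sketch of the per-pixel residual (array names are assumptions):

```python
import numpy as np

def brightness_constancy_residual(Ix, Iy, It, u, v):
    """Ix, Iy, It: spatial and temporal image derivatives; u, v: the flow
    field predicted by candidate structure-and-motion parameters. The
    residual vanishes where the model explains the intensity change."""
    return Ix * u + Iy * v + It
```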


IEEE Intelligent Vehicles Symposium | 2004

Solid or not solid: vision for radar target validation

Amir Sole; Ofer Mano; Gideon Stein; Hiroaki Kumon; Yukimasa Tamatsu; Amnon Shashua

In the context of combining radar and vision sensors for a fusion application in dense city traffic, one of the major challenges is validating radar targets. We take a high-level fusion approach, assuming that both sensor modalities can independently locate and identify targets of interest. In this context, a radar target either corresponds to a vision target, in which case it is validated without further processing, or it does not. The latter case is the focus of this paper. A non-matched radar target can correspond to a solid object that is not among the vision sensor's objects of interest (such as a guard rail), or it can be caused by reflections, in which case it is a ghost target that does not match any physical object in the real world. We describe a number of computational steps for deciding the status of non-matched radar targets. The computations combine direct motion parallax measurements, indirect motion analysis (not sufficient for computing parallax but nevertheless quite effective), and pattern classification steps covering situations in which motion analysis is weak or ineffective. A major advantage of our high-level fusion approach is that it allows the use of simpler (low-cost) radar technology to create a combined high-performance system.

Collaboration


Dive into Gideon Stein's collaborations.

Top Co-Authors

Amnon Shashua
Hebrew University of Jerusalem

Ofer Mano
Hebrew University of Jerusalem

Lily Lee
Massachusetts Institute of Technology

Raquel A. Romano
Massachusetts Institute of Technology