Anuj Dev
University of Amsterdam
Publications
Featured research published by Anuj Dev.
IEEE Transactions on Image Processing | 2000
Hieu Tat Nguyen; Marcel Worring; Anuj Dev
This correspondence deals with the segmentation of a video clip into independently moving visual objects. This is an important step in structuring video data for storage in digital libraries. The method follows a bottom-up approach. The major contribution is a new well-founded measure for motion similarity leading to a robust method for merging regions. The improvements with respect to existing methods have been confirmed by experimental results.
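As an illustration of the bottom-up approach described above, the sketch below greedily merges the most motion-similar pair of regions until no pair is similar enough. The similarity used here (distance between mean flow vectors) and the threshold are placeholders, not the well-founded measure proposed in the paper, and region adjacency is ignored for brevity.

```python
import numpy as np

def motion_similarity(flow_a, flow_b):
    """Placeholder similarity between two regions' motions:
    negative distance between their mean flow vectors (higher = more similar)."""
    return -np.linalg.norm(flow_a.mean(axis=0) - flow_b.mean(axis=0))

def merge_regions(regions, flows, threshold=-0.5):
    """Greedy bottom-up merging: repeatedly fuse the most motion-similar
    pair of regions until no pair exceeds the threshold.
    regions: list of pixel-index arrays; flows: list of (N_i, 2) arrays of
    per-pixel flow vectors, one array per region."""
    regions, flows = list(regions), list(flows)
    while len(regions) > 1:
        best, best_pair = -np.inf, None
        for i in range(len(regions)):
            for j in range(i + 1, len(regions)):
                s = motion_similarity(flows[i], flows[j])
                if s > best:
                    best, best_pair = s, (i, j)
        if best < threshold:          # no sufficiently similar pair left
            break
        i, j = best_pair
        regions[i] = np.concatenate([regions[i], regions[j]])
        flows[i] = np.concatenate([flows[i], flows[j]])
        del regions[j], flows[j]
    return regions
```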
Lecture Notes in Computer Science | 1999
Jan-Mark Geusebroek; Anuj Dev; Rein van den Boomgaard; Arnold W. M. Smeulders; Hugo Geerts
Segmentation based on color, instead of intensity only, provides an easier distinction between materials, provided that robustness is achieved against irrelevant parameters such as the illumination source, shadows, geometry, and camera sensitivities. Modeling the physical process of image formation provides insight into the effect of different parameters on object color. In this paper, a color differential geometry approach is used to detect material edges, invariant with respect to illumination color and imaging conditions. The performance of the color invariants is demonstrated by some real-world examples, showing the invariants to be successful in discounting shadow edges and illumination color.
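A minimal stand-in for the idea of photometrically invariant edge detection, not the Gaussian color model used in the paper: gradients of log-chromaticity channels, in which a multiplicative change of intensity (such as a soft shadow) cancels, so the remaining edges are driven mainly by material changes.

```python
import numpy as np
from scipy import ndimage

def material_edges(rgb, eps=1e-6):
    """Edge strength from log-chromaticity channels log(R/G) and log(B/G).
    A global multiplicative intensity change cancels in the ratios, so
    shading/shadow edges are suppressed relative to material edges.
    rgb: (H, W, 3) array.  This is only an illustrative invariant,
    not the color differential geometry framework of the paper."""
    rgb = rgb.astype(np.float64) + eps
    log_rg = np.log(rgb[..., 0] / rgb[..., 1])   # log(R/G)
    log_bg = np.log(rgb[..., 2] / rgb[..., 1])   # log(B/G)
    mag = np.zeros(rgb.shape[:2])
    for chan in (log_rg, log_bg):
        gx = ndimage.sobel(chan, axis=1)
        gy = ndimage.sobel(chan, axis=0)
        mag += gx**2 + gy**2
    return np.sqrt(mag)
```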
Image and Vision Computing | 2000
Ben J. A. Kröse; Anuj Dev; Frans C. A. Groen
If a camera mounted on a mobile robot moves along a straight line, the optic flow field is a diverging vector field whose singularity is called the "focus of expansion" (FOE). An object seen at the FOE lies on the future path of the camera. However, a mobile robot usually moves on a curved path, so the future path is no longer a point in the image domain but a line. All objects that are on the future path (and thus will cause collisions) are projected onto this line. The reverse is not necessarily true: not all points on the line correspond to collisions. In this paper we derive how the optic flow can be used to compute which objects in the image are projections of future collisions. Experiments under controlled conditions are carried out to test the theory.
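For the straight-line case mentioned above, the FOE can be recovered from the flow field by least squares: each flow vector must point away from the FOE, giving one linear constraint per pixel. The sketch below covers only that pure-translation case, not the curved-path analysis that is the subject of the paper.

```python
import numpy as np

def estimate_foe(points, flow):
    """Least-squares focus of expansion for a purely translating camera.
    Each flow vector (u, v) at image point (x, y) should point away from
    the FOE (xf, yf): u*(y - yf) - v*(x - xf) = 0, which is linear in
    (xf, yf).  points, flow: arrays of shape (N, 2)."""
    x, y = points[:, 0], points[:, 1]
    u, v = flow[:, 0], flow[:, 1]
    A = np.stack([v, -u], axis=1)      # coefficients of (xf, yf)
    b = v * x - u * y
    foe, *_ = np.linalg.lstsq(A, b, rcond=None)
    return foe
```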
Intelligent Robots and Computer Vision XIII: 3D Vision, Product Inspection, and Active Vision | 1994
Anuj Dev; Ben J. A. Kröse; Leo Dorst; Frans C. A. Groen
A collision is an event where the robot path intersects with an object in the environment. Collisions can be desired if the object is a goal, or undesired if the object is an obstacle. We call the place of intersection a collision point. Prediction of collision points relies on a continuity assumption on the robot motion, such as constant velocity. The robot is equipped with monocular vision to sense its environment. Motion of the robot results in motion of the environment in the sensory domain. The optic flow equals the projection of the environment motion on the image plane. We show that under the continuity assumption described above, the collision points can be computed from the optic flow without deriving a model of the environment. We mainly consider a mobile robot. We derive the collision points by introducing an invariant, the curvature scaled depth. This invariant couples the rotational velocity of the robot to its translational velocity and is closely related to the curvature of the mobile robot's path. We show that the spatial derivatives of the curvature scaled depth give the object surface orientation.
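To make the continuity assumption concrete: with constant translational speed v and rotational speed ω, the robot's future path is a circular arc of curvature ω/v, and an object collides if it lies within half a robot width of that arc. The sketch below checks this geometry directly; it only illustrates the assumption and is not the curvature scaled depth formulation derived in the paper.

```python
import numpy as np

def on_collision_course(obj_xy, v, omega, robot_width=0.5):
    """Collision check under the constant-velocity continuity assumption.
    Robot frame: x points forward, y to the left, robot at the origin.
    With constant v and omega, the future path is a circular arc of radius
    R = v / omega centred at (0, R); it degenerates to a straight line
    when omega == 0."""
    x, y = obj_xy
    if abs(omega) < 1e-9:                       # straight-line motion
        return x > 0 and abs(y) <= robot_width / 2
    R = v / omega                               # signed path radius
    dist = np.hypot(x, y - R)                   # object distance to arc centre
    return abs(dist - abs(R)) <= robot_width / 2
```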
Neural Networks: Artificial Intelligence and Industrial Applications | 1995
Anuj Dev; Ben J. A. Kröse; Frans C. A. Groen
The optic flow is the vector field formed by the projection of the 3D motion in the environment onto the image plane of the observer. The optic flow vector o_r at image location r is thus a function of the scene depth z_r and the relative scene motion m_r = (t, ω)_r, which can be written as o_r = C(r, m_r, z_r), where C is some non-linear function that depends only on the camera mapping.
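For reference, a common instantiation of C is the pinhole-camera flow equation (Longuet-Higgins and Prazdny form) with focal length f, translation t = (t_x, t_y, t_z), and rotation ω = (ω_x, ω_y, ω_z); the exact signs depend on the chosen coordinate frame, so this should be read as illustrative rather than as the paper's notation.

```latex
% Illustrative pinhole-camera form of o_r = C(r, m_r, z_r) at r = (x, y):
\begin{aligned}
u(x,y) &= \frac{x\,t_z - f\,t_x}{z_r} + \frac{xy}{f}\,\omega_x
          - \left(f + \frac{x^2}{f}\right)\omega_y + y\,\omega_z,\\[4pt]
v(x,y) &= \frac{y\,t_z - f\,t_y}{z_r} + \left(f + \frac{y^2}{f}\right)\omega_x
          - \frac{xy}{f}\,\omega_y - x\,\omega_z.
\end{aligned}
```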
ISIS Techn. Rep. | 2000
Jan-Mark Geusebroek; R. van den Boomgaard; Arnold W. M. Smeulders; Anuj Dev
Netherlands Heart Journal | 1997
Anuj Dev; Ben J. A. Kröse; Frans C. A. Groen
Netherlands Heart Journal | 1997
Anuj Dev; Ben J. A. Kröse; Frans C. A. Groen
Netherlands Heart Journal | 1997
Ben J. A. Kröse; Anuj Dev; X. Benavent; Frans C. A. Groen
International Conference on Multimedia Computing and Systems | 1999
Hieu Tat Nguyen; Marcel Worring; Anuj Dev