Publication


Featured research published by Daniel Raviv.


International Conference on Robotics and Automation | 1989

Reconstruction of three-dimensional surfaces from two-dimensional binary images

Daniel Raviv; Yoh-Han Pao; Kenneth A. Loparo

The authors describe a method for reconstruction of three-dimensional visible and invisible opaque surfaces using moving shadows. An object whose shape is to be determined is placed on a reference surface. A beam of substantially parallel rays of light is projected at the object at a set of different angles relative to the reference surface. Using a camera which is placed above the reference surface, the shadows cast by the object for each angle are transferred to a computer. A three-dimensional binary level shadow diagram (3DBL shadowgram) is formed and analyzed. The shadowgram has some features which make the reconstruction very simple: a section of the 3DBL shadowgram, referred to as a 2DBL shadowgram, can be used to determine the heights of points of the object to be reconstructed. Further analysis of some curves of the shadowgram can be used for the partial reconstruction of invisible surfaces. A set of experimental results to test the effects of the threshold, camera resolution, and the number of pictures demonstrates the robustness and usefulness of the method.
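The height-recovery step lends itself to a short worked example: with a beam of parallel rays striking the reference surface at elevation angle θ, an occluding edge of height h casts a shadow of length d = h / tan θ, so h = d · tan θ. The sketch below is a minimal illustration of that relation only, assuming a flat reference surface and a known beam angle; the function name and interface are hypothetical and do not reproduce the authors' 3DBL shadowgram analysis.

```python
import numpy as np

def height_from_shadow(shadow_length, light_angle_deg):
    """Height of an occluding edge from the length of its cast shadow.

    Assumes a beam of parallel rays at `light_angle_deg` above a flat
    reference surface (an illustrative simplification; the paper builds
    and analyzes a full 3-D binary-level shadowgram).
    """
    theta = np.deg2rad(light_angle_deg)
    return shadow_length * np.tan(theta)

# Example: a 4 cm shadow under a 30-degree beam implies an edge ~2.3 cm high.
print(height_from_shadow(4.0, 30.0))
```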


Proceedings of the IEEE Workshop on Visual Motion | 1991

A new approach to vision and control for road following

Daniel Raviv; Martin Herman

The paper deals with a new, quantitative, vision-based approach to road following. It is based on the theoretical framework of the recently developed optical-flow-based visual field theory. Building on this theory, the authors suggest that motion commands can be generated from a visual feature, or cue, consisting of the projection into the image of the tangent point on the edge of the road, along with the optical flow of this point. Using this cue, they suggest several different vision-based control approaches. There are several advantages to using this visual cue: (1) it is extracted directly from the image, i.e., there is no need to reconstruct the scene; (2) it can be used in a tight perception-action loop to directly generate action commands; (3) for many road-following situations this visual cue is sufficient; (4) it has a scientific basis; and (5) the related computations are relatively simple and thus suitable for real-time applications. For each control approach, they derive the value of the related steering commands.
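One reading of the cue-to-command step is a regulator that drives the optical flow of the road-edge tangent point toward zero, so the vehicle settles onto a path around the curve. The sketch below is a hypothetical proportional law built on that reading; the gain, sign convention, and function name are assumptions and are not the steering commands derived in the paper.

```python
def steering_command(tangent_point_flow, gain=0.5):
    """Hypothetical proportional controller that nulls the optical flow
    of the road-edge tangent point.

    tangent_point_flow : measured image flow of the tangent point (rad/s);
                         positive when the point drifts away from the road edge.
    Returns a steering-rate command (rad/s). The gain and sign convention
    are illustrative assumptions, not values from the paper.
    """
    return -gain * tangent_point_flow
```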


IEEE Transactions on Systems, Man, and Cybernetics | 1994

A unified approach to camera fixation and vision-based road following

Daniel Raviv; Martin Herman

Both camera fixation and vision-based road following are problems that involve tracking or fixating on 3-D points and features. This paper presents a unified theoretical approach to analyzing camera fixation and vision-based road following. The approach is based on the concept of equal flow circles (EFCs) and zero flow circles (ZFCs). Using EFCs it is possible to locate points in space relative to the fixation point and predict the behavior of their optical flow. The camera's instantaneous direction of translation and the fixation point determine the plane on which the EFCs can be found. We show that points on an EFC inside the ZFC produce optical flow that is opposite in sign to that produced by points outside the ZFC. When a point in space crosses a ZFC, it momentarily produces zero flow. For explanation purposes we analyzed a special case of motion; however, a similar approach can be taken for a more general motion of the camera. The analysis for the current motion can also be extended to find equal flow curves.
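A worked version of the circle construction, under the special case the abstract mentions (rotation axis perpendicular to the instantaneous translation) and restricted to the plane containing the translation direction and the fixation point: for translation speed v and rotation rate ω, a point at range r and angle α from the translation direction sees azimuthal flow

$$\dot\varphi(r,\alpha) = \frac{v}{r}\sin\alpha - \omega .$$

Setting this flow to a constant c gives an equal flow circle, and c = 0 gives the zero flow circle:

$$\mathrm{EFC}_c:\; r = \frac{v\sin\alpha}{\omega + c}, \qquad \mathrm{ZFC}:\; r = \frac{v}{\omega}\sin\alpha ,$$

since every locus of the form r = D sin α is a circle of diameter D passing through the camera center; in particular the ZFC has diameter v/ω. This planar flow model is an assumed simplification for illustration; the paper works in a camera-centered spherical frame.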


International Conference on Robotics and Automation | 1990

Towards an understanding of camera fixation

Daniel Raviv; Martin Herman

A fixation point is a point in 3-D space that projects to zero optical flow in an image over some period of time while the camera is moving. Quantitative aspects of fixation for a static scene are treated. For the case where the rotation axis of the camera is perpendicular to the instantaneous translation vector, it is shown that there is an infinite number of points that produce zero instantaneous optical flow. These points lie on a circle (called the zero flow circle, or ZFC) and a line. The ZFC changes its location and radius as a function of time, and the intersection of all the ZFCs is a fixation point. Points inside the ZFC produce optical flow that is opposite in sign to that produced by points outside the ZFC. This fact explains, in a more quantitative way, phenomena due to fixation. In particular, points in the neighborhood of the fixation point may change the sign of their optical flow as the camera moves. In a set of experiments, it is shown how the concept of the ZFC can be used to explain the optical flow produced by 3-D points near the fixation point.
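The sign-flip behavior near the ZFC can be checked numerically with the same simplified planar flow model used above (azimuthal flow (v/r)·sin α − ω, an assumption for illustration): at a fixed viewing angle, a point just inside the zero flow circle yields flow of one sign and a point just outside yields the opposite sign.

```python
import numpy as np

# Assumed simplified flow model for the special case (rotation axis
# perpendicular to the translation vector): azimuthal flow of a point
# at range r and angle alpha from the translation direction.
def azimuthal_flow(r, alpha, v=1.0, omega=0.2):
    return (v / r) * np.sin(alpha) - omega

alpha = np.deg2rad(60.0)
r_zfc = (1.0 / 0.2) * np.sin(alpha)        # range of the ZFC at this angle
print(azimuthal_flow(0.9 * r_zfc, alpha))  # inside the ZFC: positive flow
print(azimuthal_flow(1.1 * r_zfc, alpha))  # outside the ZFC: negative flow
print(azimuthal_flow(r_zfc, alpha))        # on the ZFC: (numerically) zero
```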


Computer Vision and Image Understanding | 1999

Novel Active Vision-Based Visual Threat Cue for Autonomous Navigation Tasks

Sridhar R. Kundur; Daniel Raviv

This paper deals with a novel vision-based motion cue called the Visual Threat Cue (VTC), suitable for autonomous navigation tasks such as collision avoidance and maintenance of clearance. The VTC is a time-based scalar parameter that provides some measure for a relative change in range as well as clearance between a 3D surface and a moving observer. It is independent of the 3D environment around the observer and needs almost no a priori knowledge about it. A practical method to extract the VTC from a sequence of images of 3D textured surfaces is also presented. It is based on the extraction of a global image dissimilarity measure called the Image Quality Measure (IQM), which is extracted directly from the raw data of the gray level images. Based on the relative variations of the measured IQM, the VTC is extracted. Several experimental results are also presented.
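As a rough illustration of the pipeline the abstract describes, the sketch below computes a simple global gray-level dissimilarity as a stand-in for the IQM and turns its relative temporal variation into a time^-1 quantity. Both functions are assumptions for illustration; they are not the published definitions of the IQM or the VTC.

```python
import numpy as np

def iqm(gray):
    """Global image dissimilarity: mean absolute gray-level difference
    between each pixel and its horizontal and vertical neighbors.
    A plausible stand-in for the Image Quality Measure (IQM), not the
    published definition."""
    g = gray.astype(float)
    return np.abs(np.diff(g, axis=1)).mean() + np.abs(np.diff(g, axis=0)).mean()

def vtc_estimate(iqm_prev, iqm_curr, dt):
    """Assumed reading of 'relative variation of the IQM': a normalized
    temporal rate of change, which has units of time^-1 like the VTC."""
    return (iqm_curr - iqm_prev) / (iqm_curr * dt)
```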


Pattern Recognition | 1995

2D feature tracking algorithm for motion analysis

Srivatsan Krishnan; Daniel Raviv

In this paper, we describe a local-neighborhood, pixel-based, adaptive algorithm to track image features, both spatially and temporally, over a sequence of monocular images. The algorithm assumes no a priori knowledge about the image features to be tracked, or about the relative motion between the camera and the three-dimensional (3D) objects. The features to be tracked are selected by the algorithm itself and correspond to the peaks of a ‘correlation surface’ constructed from a local neighborhood in the first image of the sequence to be analysed. Any kind of motion, i.e., full 6-DOF translation and rotation, can be tolerated, subject to the pixels-per-frame motion limitations; no subpixel computations are necessary. Taking into account constraints of temporal continuity, the algorithm uses simple and efficient predictive tracking over multiple frames. Trajectories of features on multiple objects can also be computed. The algorithm tolerates a slow, continuous change in the D.C. brightness level of the feature's pixels. Another important aspect of the algorithm is the use of an adaptive feature-matching threshold that accounts for changes in the relative brightness of neighboring pixels. As applications of the feature tracking algorithm, and to test the accuracy of the tracking, we show how the algorithm has been used to extract the Focus of Expansion (FOE) and to compute the time-to-contact using real image sequences of unstructured, unknown environments. In both applications, information from multiple frames is used.
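The main ingredients of such a tracker (a small local-neighborhood template, predictive search in the next frame, and a best-match criterion) can be sketched as follows. The SAD matching score, the window sizes, and the interface are illustrative assumptions; the paper's correlation surface, adaptive threshold, and feature-selection step are not reproduced here.

```python
import numpy as np

def sad(a, b):
    """Sum of absolute differences between two equal-size gray patches."""
    return np.abs(a.astype(float) - b.astype(float)).sum()

def track_feature(prev_img, next_img, center, template_size=7, search=10,
                  predicted_shift=(0, 0)):
    """Minimal sketch of local-neighborhood feature tracking: take a small
    template around `center` in the previous frame, predict where it moved,
    and search a window around the prediction in the next frame for the
    best (lowest-SAD) match. Not the published algorithm."""
    h = template_size // 2
    cy, cx = center
    template = prev_img[cy - h:cy + h + 1, cx - h:cx + h + 1]
    py, px = cy + predicted_shift[0], cx + predicted_shift[1]
    best, best_pos = None, (py, px)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = py + dy, px + dx
            if y - h < 0 or x - h < 0:
                continue  # candidate window falls off the image
            patch = next_img[y - h:y + h + 1, x - h:x + h + 1]
            if patch.shape != template.shape:
                continue  # candidate window falls off the image
            score = sad(template, patch)
            if best is None or score < best:
                best, best_pos = score, (y, x)
    return best_pos
```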


Computer Vision and Pattern Recognition | 1991

A quantitative approach to camera fixation

Daniel Raviv

The quantitative aspects of camera fixation for a static scene are addressed. In general, when the camera undergoes translation and rotation, there is an infinite number of points that produce equal optical flow for any instantaneous point in time. Using a camera-centered spherical coordinate system, it is shown how to find these points in space. For the case where the rotation axis of the camera is perpendicular to the instantaneous translation vector, these points lie on cylinders. If the elevation component of the optical flow is set to zero, then these points form a circle (called the equal flow circle, or simply EFC) and a line, i.e., all points that lie on this circle or line are observed as having the same azimuthal optical flow. A special case of the EFCs is the zero flow circle (ZFC), where both components of the optical flow are equal to zero. A fixation point is the intersection of all the ZFCs. Points inside and outside the ZFC can be quantitatively mapped using the EFCs. It is shown how the concept of the EFC and ZFC can be used to explain the optical flow produced by points near the fixation point.


Computer Vision and Pattern Recognition | 1996

Novel active-vision-based visual-threat-cue for autonomous navigation tasks

Sridhar R. Kundur; Daniel Raviv

This paper presents a new visual motion cue, which we call the Visual Threat Cue (VTC), that provides some measure for a relative change in range as well as clearance between a 3D surface and a fixating observer in motion. The VTC corresponds to visual fields surrounding a moving observer. The fields are time-based imaginary 3-D surfaces that move with the observer. They are analogous to the equipotential fields of an electric dipole. A practical method to extract the VTC is presented. The approach is independent of the 3D surface texture and needs no optical flow information, 3D reconstruction, segmentation, feature tracking, or pre-processing. The algorithm to extract the VTC was applied to several real indoor and outdoor images of textured surfaces, and similar behavior was observed for most of the textures employed.


Pattern Recognition | 1998

A vision-based pragmatic strategy for autonomous navigation

Sridhar R. Kundur; Daniel Raviv

This paper presents a novel approach, based on the active-vision paradigm, for generating local collision-free paths for mobile robot navigation in indoor as well as outdoor environments. Two measurable visual motion cues that provide some measure for a relative change in range as well as clearance between a 3D surface and a visually fixating observer in motion are described. The visual fields associated with the cues can be used to demarcate regions around a moving observer into safe and danger zones of varying degree, which is suitable for making local decisions about steering and speed commands for the vehicle.


Computer Vision and Pattern Recognition | 1997

An image-based visual-motion-cue for autonomous navigation

Sridhar R. Kundur; Daniel Raviv; Ernest Kent

This paper presents a novel time-based visual motion cue called the Hybrid Visual Threat Cue (HVTC) that provides some measure for a change in relative range as well as absolute clearances between a 3D surface and a moving observer. It is shown that the HVTC is a linear combination of the Time-To-Contact (TTC), visual looming, and the Visual Threat Cue (VTC). The visual field associated with the HVTC can be used to demarcate the regions around a moving observer into safe and danger zones of varying degree, which may be suitable for autonomous navigation tasks. The HVTC is independent of the 3D environment and needs almost no a priori information about it. It is rotation independent, and is measured in time^-1 units. Several approaches to extract the HVTC are suggested. A practical method to extract it from a sequence of images of a 3D textured surface obtained by a visually fixating, fixed-focus monocular camera in motion is also presented. This approach of extracting the HVTC is independent of the type of 3D surface texture and needs no optical flow information, 3D reconstruction, segmentation, or feature tracking.
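The "time^-1 units" remark can be made concrete with the standard time-based definitions used in this line of work: for range R(t) between the observer and the fixated surface, the time-to-contact is the range divided by the closing speed, and visual looming is its reciprocal, so looming and the VTC are both time^-1 quantities. The specific linear combination that yields the HVTC is derived in the paper and is not reproduced here; the notation below is an assumption for illustration.

$$\mathrm{TTC} = \frac{R}{-\dot R}, \qquad L = \frac{-\dot R}{R} = \frac{1}{\mathrm{TTC}} .$$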

Collaboration


Dive into Daniel Raviv's collaborations.

Top Co-Authors

Martin Herman, National Institute of Standards and Technology
Sridhar R. Kundur, Florida Atlantic University
H Yakali, Florida Atlantic University
James S. Albus, National Institute of Standards and Technology
Brandon Moore, Florida Atlantic University
Eiki Martinson, Florida Atlantic University
George Roskovich, Florida Atlantic University
Kenneth A. Loparo, Case Western Reserve University
Kunal Joarder, Florida Atlantic University
Yoh-Han Pao, Case Western Reserve University