
Publication


Featured research published by Shahriar Negahdaripour.


Journal of the Optical Society of America A: Optics, Image Science, and Vision | 1988

Closed-form solution of absolute orientation using orthonormal matrices

Berthold K. P. Horn; Hugh M. Hilden; Shahriar Negahdaripour

Finding the relationship between two coordinate systems by using pairs of measurements of the coordinates of a number of points in both systems is a classic photogrammetric task. The solution has applications in stereophotogrammetry and in robotics. We present here a closed-form solution to the least-squares problem for three or more points. Currently, various empirical, graphical, and numerical iterative methods are in use. Derivation of a closed-form solution can be simplified by using unit quaternions to represent rotation, as was shown in an earlier paper [J. Opt. Soc. Am. A 4, 629 (1987)]. Since orthonormal matrices are used more widely to represent rotation, we now present a solution in which 3 × 3 matrices are used. Our method requires the computation of the square root of a symmetric matrix. We compare the new result with that obtained by an alternative method in which orthonormality is not directly enforced. In this other method a best-fit linear transformation is found, and then the nearest orthonormal matrix is chosen for the rotation. We note that the best translational offset is the difference between the centroid of the coordinates in one system and the rotated and scaled centroid of the coordinates in the other system. The best scale is equal to the ratio of the root-mean-square deviations of the coordinates in the two systems from their respective centroids. These exact results are to be preferred to approximate methods based on measurements of a few selected points.
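The closed-form recipe in this abstract (rotation first, then scale as a ratio of root-mean-square deviations, then translation as a centroid difference) can be sketched in a few lines. The paper obtains the rotation via the square root of a symmetric matrix; the sketch below substitutes the SVD route to the nearest proper rotation, which agrees in non-degenerate configurations. Function and variable names are mine, not from the paper.

```python
import numpy as np

def absolute_orientation(a, b):
    """Least-squares similarity transform: find R, s, t with b ~ s * R @ a + t.
    a, b: (3, n) arrays of corresponding points in the two systems."""
    ca, cb = a.mean(axis=1, keepdims=True), b.mean(axis=1, keepdims=True)
    ap, bp = a - ca, b - cb                        # deviations from centroids
    U, _, Vt = np.linalg.svd(bp @ ap.T)            # SVD of 3x3 cross-covariance
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])
    R = U @ D @ Vt                                 # nearest proper rotation
    s = np.sqrt((bp**2).sum() / (ap**2).sum())     # ratio of RMS deviations
    t = cb - s * R @ ca                            # centroid difference
    return R, s, t

# check on synthetic correspondences with a known transform
rng = np.random.default_rng(0)
a = rng.normal(size=(3, 10))
th = np.pi / 6                                     # rotate 30 degrees about z
R_true = np.array([[np.cos(th), -np.sin(th), 0.0],
                   [np.sin(th),  np.cos(th), 0.0],
                   [0.0,         0.0,        1.0]])
b = 2.0 * R_true @ a + np.array([[1.0], [2.0], [3.0]])
R, s, t = absolute_orientation(a, b)
```

The scale and translation lines mirror the abstract's statements verbatim; only the rotation step differs in route.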


IEEE Transactions on Pattern Analysis and Machine Intelligence | 1987

Direct Passive Navigation

Shahriar Negahdaripour; Berthold K. P. Horn

In this correspondence, we show how to recover the motion of an observer relative to a planar surface from image brightness derivatives. We do not compute the optical flow as an intermediate step, only the spatial and temporal brightness gradients (at a minimum of eight points). We first present two iterative schemes for solving nine nonlinear equations in terms of the motion and surface parameters that are derived from a least-squares formulation. An initial pass over the relevant image region is used to accumulate a number of moments of the image brightness derivatives. All of the quantities used in the iteration are efficiently computed from these totals without the need to refer back to the image. We then show that either of two possible solutions can be obtained in closed form. We first solve a linear matrix equation for the elements of a 3 × 3 matrix. The eigenvalue decomposition of the symmetric part of the matrix is then used to compute the motion parameters and the plane orientation. A new compact notation allows us to show easily that there are at most two planar solutions.
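As a much-simplified illustration of the "direct" ingredient described above — accumulate moments of the brightness derivatives in one pass, then solve without referring back to the image — the sketch below handles only the degenerate case of a single constant image motion over a region, not the paper's nine-parameter motion-plus-plane problem. The synthetic frames and all names are illustrative.

```python
import numpy as np

# Synthetic frame pair: a smooth pattern translating by (0.5, -0.3) px/frame.
x, y = np.meshgrid(np.arange(128.0), np.arange(128.0), indexing="ij")
def frame(t, u=0.5, v=-0.3):
    return np.sin(0.2 * (x - u * t)) + np.cos(0.3 * (y - v * t))

I0, I1 = frame(0), frame(1)
Ex, Ey = np.gradient(0.5 * (I0 + I1))    # spatial brightness derivatives
Et = I1 - I0                             # temporal brightness derivative

# One pass accumulates moments of the derivatives; the solve then never
# refers back to the image.
A = np.array([[(Ex * Ex).sum(), (Ex * Ey).sum()],
              [(Ex * Ey).sum(), (Ey * Ey).sum()]])
b = -np.array([(Ex * Et).sum(), (Ey * Et).sum()])
u_est, v_est = np.linalg.solve(A, b)
```

No per-pixel flow field is ever stored; only six running sums feed the solve, which is the property the paper exploits at much larger scale.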


IEEE Transactions on Pattern Analysis and Machine Intelligence | 1998

Revised definition of optical flow: integration of radiometric and geometric cues for dynamic scene analysis

Shahriar Negahdaripour

Optical flow has been commonly defined as the apparent motion of image brightness patterns in an image sequence. In this paper, we propose a revised definition to overcome shortcomings in interpreting optical flow merely as a geometric transformation field. The new definition is a complete representation of geometric and radiometric variations in dynamic imagery. We argue that this is more consistent with the common interpretation of optical flow induced by various scene events. This leads to a general framework for the investigation of problems in dynamic scene analysis, based on the integration and unified treatment of both geometric and radiometric cues in time-varying imagery. We discuss selected models, including the generalized dynamic image model, for the estimation of optical flow. We show how various types of 3D scene information are encoded in, and thus may be extracted from, the geometric and radiometric components of optical flow. We provide selected examples based on experiments with real images.


IEEE Transactions on Biomedical Engineering | 2000

A new method for the extraction of fetal ECG from the composite abdominal signal

Ali Khamene; Shahriar Negahdaripour

We developed a wavelet transform-based method to extract the fetal electrocardiogram (ECG) from the composite abdominal signal. This is based on the detection of the singularities obtained from the composite abdominal signal, using the modulus maxima in the wavelet domain. Modulus maxima locations of the abdominal signal are used to discriminate between maternal and fetal ECG signals. Two different approaches have been considered. In the first approach, at least one thoracic signal is used as a priori information to perform the classification, whereas in the second approach no thoracic signal is needed. A reconstruction method is utilized to obtain the fetal ECG signal from the detected fetal modulus maxima. The proposed technique is different from the classical time-domain methods, in that we exploit the most distinct features of the signal, leading to more robustness with respect to signal perturbations. Results of experiments with both synthetic and real ECG data are presented to demonstrate the efficacy of the proposed method.
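A toy sketch of the modulus-maxima idea: convolve the composite signal with a wavelet at one scale, keep local maxima of the coefficient modulus, and separate the two ECGs by coefficient amplitude. This stands in for the paper's method only loosely — the Mexican-hat wavelet, the single scale, the amplitude threshold, and the spike-train signal are all assumptions for illustration.

```python
import numpy as np

def mexican_hat(scale, half_width=8):
    """Mexican-hat (Ricker) wavelet sampled on an odd-length grid."""
    t = np.arange(-half_width * scale, half_width * scale + 1) / scale
    return (1.0 - t**2) * np.exp(-t**2 / 2)

def modulus_maxima(signal, scale, floor=0.1):
    """Indices where the wavelet-coefficient modulus is a local max above a floor."""
    w = np.convolve(signal, mexican_hat(scale), mode="same")
    m = np.abs(w)
    keep = (m[1:-1] > m[:-2]) & (m[1:-1] >= m[2:]) & (m[1:-1] > floor * m.max())
    return np.flatnonzero(keep) + 1, w

# toy composite: strong "maternal" spikes plus weaker "fetal" spikes
sig = np.zeros(1500)
maternal_pos, fetal_pos = [100, 500, 900, 1300], [250, 640, 1030]
sig[maternal_pos] = 1.0
sig[fetal_pos] = 0.4

mm, w = modulus_maxima(sig, scale=5)
# coefficient amplitude at each maximum discriminates the two ECGs
maternal_mm = [i for i in mm if abs(w[i]) > 0.7 * np.abs(w).max()]
```

Every true spike produces a modulus maximum at its location (the wavelet's side lobes add extra, weaker maxima, which is why the classification uses amplitude rather than mere presence).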


international conference on computer vision | 1993

A generalized brightness change model for computing optical flow

Shahriar Negahdaripour; Chih Ho Yu

The authors propose an image motion constraint equation based on a model which allows the brightness of a scene point to vary with time, unlike the case in the brightness constancy model. Using this model, they describe a method for the computation of optical flow and investigate its performance in a variety of conditions involving brightness variations of scene points, due to illumination nonuniformity, light source motion, specular reflection, and/or interreflection. It is shown that in the application of this method, care must be taken in the estimation of image derivatives using finite difference methods to prevent biases in the solution. A simple modification is suggested to overcome the problem. A comparison is made with two other models, including the classical brightness constancy model, through results from experiments with real images.
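A minimal sketch of a brightness-varying constraint of the kind described: each pixel contributes one equation Ex·u + Ey·v + Et = m·E + c, where a multiplier m and offset c absorb radiometric change, and a least-squares solve over the region recovers all four unknowns. The specific form and the synthetic frames are my assumptions, not necessarily the paper's exact model.

```python
import numpy as np

x, y = np.meshgrid(np.arange(128.0), np.arange(128.0), indexing="ij")
pattern = lambda u, v: np.sin(0.15 * (x - u)) + np.cos(0.2 * (y - v))

# frame 2 is shifted by (0.4, 0.2) px AND radiometrically rescaled and offset
I0 = pattern(0.0, 0.0)
I1 = 1.05 * pattern(0.4, 0.2) + 0.02

E = 0.5 * (I0 + I1)                      # brightness and its derivatives
Ex, Ey = np.gradient(E)
Et = I1 - I0

# per-pixel constraint: Ex*u + Ey*v + Et = m*E + c
A = np.stack([Ex, Ey, -E, -np.ones_like(E)], axis=-1).reshape(-1, 4)
u_est, v_est, m_est, c_est = np.linalg.lstsq(A, -Et.ravel(), rcond=None)[0]
```

Under brightness constancy (m = c = 0 forced), the same data would bias the flow estimate; letting the radiometric terms soak up the global gain and offset keeps (u, v) near the true shift.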


International Journal of Computer Vision | 1991

Motion recovery from image sequences using first-order optical flow information

Shahriar Negahdaripour; Shinhak Lee

The primary goal in motion vision is to extract information about the motion and shape of an object in a scene that is encoded in the optic flow. While many solutions to this problem, both iterative and in closed form, have been proposed, practitioners still view the problem as unsolved, since these methods, for the most part, cannot deal with some important aspects of realistic scenes. Among these are complex unsegmented scenes, nonsmooth objects, and general motion of the camera. In addition, the performance of many methods degrades ungracefully as the quality of the data deteriorates. Here, we will derive a closed-form solution for motion estimation based on the first-order information from two image regions with distinct flow “structures”. A unique solution is guaranteed when these correspond to two surface patches with different normal vectors. Given an image sequence, we will show how the image may be segmented into regions with the necessary properties, optical flow is computed for these regions, and motion parameters are calculated. The method can be applied to arbitrary scenes and any camera motion. We will show theoretically why the method is more robust than other proposed techniques that require the knowledge of the full flow or information up to the second-order terms of it. Experimental results are presented to support the theoretical derivations.
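The "first-order information" of a flow field over a region is its affine (constant plus linear) part. As background for the abstract above, here is a least-squares fit of those six affine parameters to sampled flow vectors; the region, the field, and the noise level are made up.

```python
import numpy as np

rng = np.random.default_rng(2)
xy = rng.uniform(-1.0, 1.0, size=(200, 2))          # sample points in one region
A_true = np.array([[0.10, 0.03, -0.02],             # u = 0.10 + 0.03x - 0.02y
                   [-0.05, 0.01, 0.04]])            # v = -0.05 + 0.01x + 0.04y
X = np.hstack([np.ones((200, 1)), xy])              # design matrix [1, x, y]
uv = X @ A_true.T + rng.normal(0.0, 1e-3, (200, 2)) # noisy flow samples

A_est = np.linalg.lstsq(X, uv, rcond=None)[0].T     # six affine flow parameters
```

Two regions whose fitted affine parameters differ (different surface normals, per the abstract) are what the paper's closed-form motion solution consumes.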


IEEE Journal of Oceanic Engineering | 2006

An ROV Stereovision System for Ship-Hull Inspection

Shahriar Negahdaripour; Pezhman Firoozfam

Ship hulls, as well as bridges, port dock pilings, dams, and various underwater structures, need to be inspected for periodic maintenance. Additionally, there is a critical need to provide protection against sabotage activities, and to establish effective countermeasures against illegal smuggling activities. Unmanned underwater vehicles are suitable platforms for the development of automated inspection systems, but require integration with appropriate sensor technologies. This paper describes a vision system for automated ship-hull inspection, based on computing the necessary information for positioning, navigation, and mapping of the hull from stereo images. Binocular cues are critical in resolving a number of complex visual artifacts that hamper monocular vision in shallow-water conditions. Furthermore, they simplify the estimation of vehicle pose and motion, which is fundamental for successful automatic operation. The system has been implemented on a commercial remotely operated vehicle (ROV), and tested in pool and dock tests. Results from various trials are presented to demonstrate the system capabilities.
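For context on how stereo yields positioning information: with a calibrated, rectified stereo pair, range follows from disparity by triangulation, Z = f·B/d. The focal length and baseline below are hypothetical, not the ROV's.

```python
import numpy as np

# Hypothetical calibration: focal length in pixels, stereo baseline in meters.
FOCAL_PX, BASELINE_M = 800.0, 0.12

def depth_from_disparity(disparity_px):
    """Rectified-stereo triangulation: Z = f * B / d (meters)."""
    d = np.asarray(disparity_px, dtype=float)
    return np.where(d > 0, FOCAL_PX * BASELINE_M / np.maximum(d, 1e-9), np.inf)
```

With these made-up numbers, a 48-pixel disparity places a hull feature at about 2 m range; since Z ∝ 1/d, a fixed disparity error translates into range error that grows quadratically with distance, one reason close-range hull inspection suits stereo well.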


Computer Vision, Graphics, and Image Processing | 1989

A direct method for locating the focus of expansion

Shahriar Negahdaripour; Berthold K. P. Horn

We address the problem of recovering the motion of a monocular observer relative to a rigid scene. We do not make any assumptions about the shapes of the surfaces in the scene, nor do we use estimates of the optical flow or point correspondences. Instead, we exploit the spatial gradient and the time rate of change of brightness over the whole image and explicitly impose the constraint that the surface of an object in the scene must be in front of the camera for it to be imaged.
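Background on the quantity being located: under pure camera translation, image flow radiates from the focus of expansion (FOE). The paper finds the FOE directly from brightness derivatives with a positive-depth constraint; the sketch below shows only the underlying geometry, recovering the FOE from an already-given radial flow field via the collinearity constraint v·(x − x0) = u·(y − y0). All numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
foe = np.array([30.0, -10.0])                    # ground-truth FOE (pixels)
pts = rng.uniform(-100.0, 100.0, size=(50, 2))   # image locations
lam = rng.uniform(0.1, 1.0, size=(50, 1))        # per-point scale (depth-dependent)
flow = lam * (pts - foe)                         # pure-translation radial flow

# collinearity of each flow vector with the ray from the FOE:
#   v*(x - x0) - u*(y - y0) = 0   =>   v*x0 - u*y0 = v*x - u*y
u, v = flow[:, 0], flow[:, 1]
A = np.column_stack([v, -u])
b = v * pts[:, 0] - u * pts[:, 1]
foe_est = np.linalg.lstsq(A, b, rcond=None)[0]
```

Note the per-point scale (which encodes unknown depth) cancels out of the collinearity constraint, which is why the FOE is recoverable without knowing the scene's shape.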


IEEE Journal of Oceanic Engineering | 2002

Mosaic-based positioning and improved motion-estimation methods for automatic navigation of submersible vehicles

Shahriar Negahdaripour; Xun Xu

Knowledge of the camera trajectory, which may be determined from the motions between consecutive frames of a video clip, can be used to register images for constructing image mosaics. We discuss a mosaic-based positioning framework for building photo-mosaics and concurrently utilizing them for improved positioning. In this approach, the mosaic is directly exploited in bounding the accumulation of position errors as we integrate the incremental motions of the camera. It is also shown that two earlier closed-form solutions for the estimation of motion directly from spatio-temporal image gradients, as for most gradient-based techniques based on the application of linear(ized) image motion constraint equations, are corrupted with systematic biases. These can be reduced significantly by incorporating the higher-order terms. We propose recursive methods to solve the new nonlinear constraint equations, and investigate the performance of the new solutions in a number of experiments with synthetic and real data.
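The bias-and-refinement point in this abstract can be seen in one dimension: a single linearized, gradient-based solve for a 1.5-sample shift is noticeably biased, while re-warping with the current estimate and re-solving — a simple stand-in for the recursive solutions mentioned — removes most of the bias. Signals and parameters are synthetic.

```python
import numpy as np

def shift_estimate(f, g):
    """One linearized (gradient-based) solve for the shift between 1-D signals."""
    Ex = np.gradient(0.5 * (f + g))          # spatial derivative
    Et = g - f                               # temporal difference
    s = slice(5, -5)                         # skip boundary samples
    return -(Ex[s] * Et[s]).sum() / (Ex[s] ** 2).sum()

def shift_estimate_recursive(f, g, iters=5):
    """Warp by the current estimate and re-solve, shrinking the linearization bias."""
    x = np.arange(len(f), dtype=float)
    d = 0.0
    for _ in range(iters):
        g_w = np.interp(x + d, x, g)         # undo the shift estimated so far
        d += shift_estimate(f, g_w)
    return d

x = np.arange(400, dtype=float)
f = np.sin(0.5 * x)
g = np.sin(0.5 * (x - 1.5))                  # true shift: 1.5 samples
d_one_shot = shift_estimate(f, g)
d_refined = shift_estimate_recursive(f, g)
```

The one-shot estimate's systematic error comes from the linearization and the finite-difference derivatives, mirroring the biases the paper identifies in closed-form gradient methods; each re-warp solves a smaller residual problem where the linearization is more accurate.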


IEEE Journal of Oceanic Engineering | 2003

Stereovision imaging on submersible platforms for 3-D mapping of benthic habitats and sea-floor structures

Shahriar Negahdaripour; Hossein Madjidi

We investigate the deployment of a submersible platform with stereovision imaging capability for three-dimensional (3D) mapping of benthic habitats and other sea-floor structures over local areas. A complete framework is studied, comprising: 1) suitable trajectories to be executed for data collection; 2) data processing for positioning and trajectory followed by online frame-to-frame and frame-to-mosaic registration of images, as well as recursive global realignment of positions along the path; and 3) 3D mapping by the fusion of various visual cues, including motion and stereo within a Kalman filter. The computational requirements of the system are evaluated, formalizing how processing may be achieved in real time. The proposed scenario is simulated for testing with known ground truth to assess the system performance, to quantify various errors, and to identify how performance may be improved. Experiments with underwater images are also presented to verify the performance of various components and the overall scheme.
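As a sketch of the cue-fusion step, here is a scalar Kalman filter that fuses two measurement streams of different accuracy (standing in for stereo- and motion-derived estimates of a point's depth) into one estimate with shrinking variance. The noise levels and the static-depth model are assumptions for illustration, not the paper's filter.

```python
import numpy as np

def kalman_fuse(z_seq, r_seq, x0=0.0, p0=1e3):
    """Scalar Kalman filter with a static state (a fixed point's depth):
    fold in measurements z, each with its own variance r."""
    x, p = x0, p0
    for z, r in zip(z_seq, r_seq):
        k = p / (p + r)            # Kalman gain
        x += k * (z - x)           # measurement update
        p *= (1.0 - k)             # variance shrinks with every cue
    return x, p

rng = np.random.default_rng(1)
depth = 5.0
stereo = depth + rng.normal(0.0, 0.2, 50)    # accurate cue
motion = depth + rng.normal(0.0, 0.8, 50)    # noisier cue
z = np.empty(100); r = np.empty(100)
z[0::2], z[1::2] = stereo, motion            # interleave the two streams
r[0::2], r[1::2] = 0.2**2, 0.8**2            # their measurement variances
x_est, p_est = kalman_fuse(z, r)
```

The per-measurement variance lets the filter weight the accurate cue more heavily, which is the essential benefit of fusing stereo and motion rather than averaging them naively.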

Collaboration


Shahriar Negahdaripour's top co-authors:

Xun Xu (University of Miami)
Berthold K. P. Horn (Massachusetts Institute of Technology)
Lisa N. Brisson (Florida Atlantic University)