John Oliensis
Stevens Institute of Technology
Publications
Featured research published by John Oliensis.
Computer Vision and Pattern Recognition | 2003
Shan Lu; Dimitris N. Metaxas; Dimitris Samaras; John Oliensis
We present a model-based approach to integrating multiple cues for tracking high-degree-of-freedom articulated motions and for model refinement, and we apply it to hand tracking from a single camera sequence. Hand tracking is particularly challenging because of occlusions, shading variations, and the high dimensionality of the motion. The novelty of our approach lies in combining multiple sources of information, from edges, optical flow, and shading, to refine the model during tracking. We first use a previously formulated generalized version of the gradient-based optical flow constraint that includes shading flow, i.e., the variation of the shading of the object as it rotates with respect to the light source. Using this model, we track the hand's complex articulated motion in the presence of shading changes. We use a forward recursive dynamic model to track the motion in response to data-derived 3D forces applied to the model. However, when the initial shape is inaccurate, the generalized optical flow constraint is violated; we use the error in the generalized optical flow equation to compute generalized forces that correct the model shape at each step. The effectiveness of our approach is demonstrated by experiments on a number of different hand motions with shading changes, rotations, and occlusions of significant parts of the hand.
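The flavor of the generalized constraint can be illustrated with a minimal sketch. This is an assumed simplification for illustration, not the paper's full shading-flow model: a single Lucas-Kanade-style window in which one extra unknown absorbs intensity change due to shading, rather than enforcing strict brightness constancy.

```python
import numpy as np

def flow_with_shading_term(Ix, Iy, It):
    """Solve Ix*u + Iy*v + c = -It over a window in least squares.

    Ix, Iy, It: spatial and temporal image-derivative arrays for one window.
    The extra unknown c absorbs intensity change due to shading, so the
    constraint no longer assumes strict brightness constancy.
    """
    A = np.column_stack([Ix.ravel(), Iy.ravel(), np.ones(Ix.size)])
    b = -It.ravel()
    (u, v, c), *_ = np.linalg.lstsq(A, b, rcond=None)
    return u, v, c
```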
Computer Vision and Pattern Recognition | 1992
Paul Dupuis; John Oliensis
An approach to shape-from-shading based on a connection with a calculus of variations/optimal control problem is proposed. An explicit representation for the surface corresponding to a shaded image is given; uniqueness of the surface (under suitable conditions) is an immediate consequence. The approach leads naturally to an algorithm for shape reconstruction that is simple, fast, provably convergent (in many cases, provably convergent to the correct solution), and does not require regularization. Given a continuous image, the algorithm can be proved to converge to the continuous surface solution as the image sampling frequency is taken to infinity. Experimental results are presented for synthetic and real images.
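The paper's control-theoretic scheme is not reproduced here, but a minimal sketch conveys the idea for one special case: for a Lambertian surface lit along the viewing axis, the image constraint reduces to the eikonal equation |grad z| = sqrt(1/I^2 - 1), solvable by a simple upwind fixed-point iteration. The anchoring at singular points and the periodic boundary handling are simplifying assumptions.

```python
import numpy as np

def sfs_frontal_light(I, n_iter=2000, h=1.0):
    """Upwind fixed-point solver for |grad z| = sqrt(1/I^2 - 1).

    I: image with values in (0, 1]; assumes at least one brightest pixel
    (I == 1, a singular point) to anchor the heights. Returns a height map.
    Uses np.roll, so boundaries are treated as periodic (a simplification).
    """
    F = np.sqrt(np.maximum(1.0 / np.clip(I, 1e-6, 1.0) ** 2 - 1.0, 0.0))
    z = np.full(I.shape, 1e9)
    z[F == 0.0] = 0.0                       # singular points: known height
    for _ in range(n_iter):
        a = np.minimum(np.roll(z, 1, 0), np.roll(z, -1, 0))  # best x-neighbor
        b = np.minimum(np.roll(z, 1, 1), np.roll(z, -1, 1))  # best y-neighbor
        lo, hi = np.minimum(a, b), np.maximum(a, b)
        one_sided = lo + F * h              # update driven by one neighbor
        disc = np.maximum(2.0 * (F * h) ** 2 - (lo - hi) ** 2, 0.0)
        two_sided = 0.5 * (lo + hi + np.sqrt(disc))
        z = np.minimum(z, np.where(two_sided > hi, two_sided, one_sided))
        z[F == 0.0] = 0.0
    return z
```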
IEEE Transactions on Pattern Analysis and Machine Intelligence | 2007
John Oliensis; Richard I. Hartley
We give the first complete theoretical convergence analysis for the iterative extensions of the Sturm/Triggs algorithm. We show that the simplest extension, SIESTA, converges to nonsense results. Another proposed extension has similar problems, and experiments with balanced iterations show that they can fail to converge or become unstable. We present CIESTA, an algorithm which avoids these problems. It is identical to SIESTA except for one simple extra computation. Under weak assumptions, we prove that CIESTA iteratively decreases an error and approaches fixed points. With one more assumption, we prove it converges uniquely. Our results imply that CIESTA gives a reliable way of initializing other algorithms such as bundle adjustment. A descent method such as Gauss-Newton can be used to minimize the CIESTA error, combining quadratic convergence with the advantage of minimizing in the projective depths. Experiments show that CIESTA performs better than other iterations.
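For context, here is a bare-bones sketch of the iteration family under analysis. It is the SIESTA-style baseline with an assumed crude normalization standing in for balancing; CIESTA's one extra corrective computation is not reproduced here.

```python
import numpy as np

def siesta_factorization(x, n_iter=50):
    """Sturm/Triggs-style iterative projective factorization (sketch).

    x: (m, n, 3) homogeneous image points for m views and n points.
    Alternates a rank-4 projection of the scaled measurement matrix with
    re-estimation of the projective depths.
    """
    m, n, _ = x.shape
    lam = np.ones((m, n))                            # projective depths
    for _ in range(n_iter):
        W = (lam[..., None] * x).transpose(0, 2, 1).reshape(3 * m, n)
        W /= np.linalg.norm(W)                       # crude balancing step
        U, s, Vt = np.linalg.svd(W, full_matrices=False)
        W4 = (U[:, :4] * s[:4]) @ Vt[:4]             # nearest rank-4 matrix
        # re-estimate each depth by projecting the rank-4 rows onto the points
        w = W4.reshape(m, 3, n).transpose(0, 2, 1)   # (m, n, 3)
        lam = np.einsum('mni,mni->mn', w, x) / np.einsum('mni,mni->mn', x, x)
    P = (U[:, :4] * s[:4]).reshape(m, 3, 4)          # projective cameras
    X = Vt[:4]                                       # (4, n) structure
    return P, X
```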
International Conference on Computer Vision | 1993
John Oliensis; Paul Dupuis
A global algorithm for reconstructing shape from shading is described. It incorporates an earlier local algorithm shown to be capable of fast, robust surface reconstruction for general surfaces when a small amount of information about the surface is provided. The new algorithm determines this information automatically and thus can reconstruct a general surface from shading with no a priori information about the surface. In experimental tests on complex synthetic images, the algorithm produced good surface reconstructions over most of the image. For 128 × 128 images, reconstruction took less than 30 s on a DECstation 5000. The algorithm appears noise resistant, giving good reconstructions even with added pixel noise of ±10%.
Computer Vision and Image Understanding | 2010
Hongzhi Wang; John Oliensis
One approach to image segmentation defines a function of image partitions whose maxima correspond to perceptually salient segments. We extend previous approaches in this framework by requiring that our image model sharply decrease in its power to organize the image as a segment's boundary is perturbed from its true position. Instead of making segment boundaries prefer image edges, we add a term to the objective function that seeks a sharp change in fitness with respect to the entire contour's position, generalizing edge detection's search for sharp changes in local image brightness. We also introduce a prior on the shape of a salient contour that expresses the observed multi-scale distribution of contour curvature for physical contours. We show that our new term correlates strongly with salient structure. We apply our method to real images and verify that the new term improves performance. Comparisons with other state-of-the-art approaches validate our method's advantages.
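A schematic of the new term's idea follows; the function signature and the finite-difference probe along contour normals are illustrative assumptions, not the paper's formulation.

```python
import numpy as np

def fitness_sharpness(fitness, contour, normals, eps=1.0):
    """Score how sharply a partition's fitness drops as the whole contour
    is perturbed off its position.

    fitness: callable scoring an (n, 2) contour under some image model.
    contour: (n, 2) boundary points; normals: (n, 2) unit outward normals.
    A perceptually salient boundary should sit at a sharp local maximum.
    """
    f_here = fitness(contour)
    f_in = fitness(contour - eps * normals)   # contour shrunk inward
    f_out = fitness(contour + eps * normals)  # contour pushed outward
    return 2.0 * f_here - f_in - f_out        # discrete second difference
```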
European Conference on Computer Vision | 1996
John Oliensis
We analyze the problem of recovering rotation from two image frames, deriving an exact bound on the size of the error. Under the single weak requirement that the average translational image displacements be smaller than the field of view, we demonstrate rigorously, and validate experimentally, that the error is small. These results form part of our correctness proof for a recently developed algorithm for recovering structure and motion from a multiple-image sequence. In addition, we argue, and demonstrate experimentally, that in the complementary domain, when the translation is large, the whole motion can typically be recovered robustly, provided the 3D points vary significantly in depth.
European Conference on Computer Vision | 2005
John Oliensis
We analyze the least-squares error for structure from motion with a single infinitesimal motion (structure from optical flow). We present asymptotic approximations to the noiseless error over two complementary regions of motion estimates: roughly forward and non-forward translations. Our approximations are powerful tools for understanding the error, and experiments show that they capture its detailed behavior over the entire range of motions. We illustrate the use of our approximations by deriving new properties of the least-squares error. We generalize the earlier results of Jepson/Heeger/Maybank on the bas-relief ambiguity and of Oliensis on the reflected minimum. We explain the error's complexity and its multiple local minima for roughly forward translation estimates (epipoles within the field of view) and identify the factors that make this complexity likely. For planar scenes, we clarify the effects of the two-fold ambiguity, show the existence of a new, double bas-relief ambiguity, and analyze the error's local minima. For nonplanar scenes, we derive simplified error approximations under reasonable assumptions on the image and scene; for example, we show that the error tends to have a simpler form when many points are tracked. We show experimentally that our zero-noise analysis gives a good model of the error for large noise. We show theoretically and experimentally that the error for projective structure from motion is simpler but flatter than the error for calibrated images.
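A standard formulation of this least-squares error is sketched below; the notation follows common structure-from-motion treatments and is assumed here rather than taken verbatim from the paper. For a candidate translation t, the inverse depths are eliminated in closed form by projection, the rotation is found by linear least squares, and the residual is the error value.

```python
import numpy as np

def sfm_ls_error(pts, flows, t, f=1.0):
    """Least-squares structure-from-optical-flow error at one translation t.

    pts: (n, 2) image points, flows: (n, 2) flow vectors, t: (3,) candidate
    translation direction, f: focal length. Flow model (Longuet-Higgins/
    Prazdny): u_i = (1/Z_i) A_i t + B_i w. Projecting each residual
    orthogonally to A_i t eliminates the unknown inverse depth 1/Z_i.
    """
    rows, rhs = [], []
    for (x, y), u in zip(pts, flows):
        A = np.array([[-f, 0.0, x], [0.0, -f, y]])
        B = np.array([[x * y / f, -(f + x * x / f), y],
                      [f + y * y / f, -x * y / f, -x]])
        a = A @ t
        P = np.eye(2) - np.outer(a, a) / max(a @ a, 1e-12)  # kills depth term
        rows.append(P @ B)
        rhs.append(P @ u)
    M, b = np.vstack(rows), np.hstack(rhs)
    w = np.linalg.lstsq(M, b, rcond=None)[0]    # best rotation for this t
    return float(np.sum((M @ w - b) ** 2))
```

Sweeping t over a grid of candidate directions (epipoles) traces out the error surface whose local minima and bas-relief behavior the paper characterizes.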
European Conference on Computer Vision | 2002
René Vidal; John Oliensis
We study the multi-frame structure-from-motion problem when the camera translates on a plane with small baselines and arbitrary rotations. This case arises in many practical applications, for example in ground robot navigation. We consider the small-baseline framework of [8], in which a factorization method computes the structure and motion parameters accurately, efficiently, and with guaranteed convergence. When the camera translates on a plane, the algorithm in [8] cannot be applied because the estimation matrix drops rank, so the equations are no longer linear. In this paper, we show how to solve those equations linearly while preserving the accuracy, speed, and convergence properties of the non-planar algorithm. We evaluate the proposed algorithms on synthetic and real image sequences and compare our results with those of the optimal algorithm. The proposed algorithms are very fast and accurate, produce fewer than 0.3% outliers, and work well for small-to-medium baselines and for non-planar as well as planar motions.
IEEE Transactions on Pattern Analysis and Machine Intelligence | 2010
Hongzhi Wang; John Oliensis
We use segmentations to match images by shape. The new matching technique does not require point-to-point edge correspondence and is robust to small shape variations and spatial shifts. To address the unreliability of segmentations computed bottom-up, we give a closed-form approximation to an average over all segmentations. Our method has many extensions, yielding new algorithms for tracking, object detection, segmentation, and edge-preserving smoothing. For segmentation, instead of a maximum a posteriori approach, we compute the "central" segmentation minimizing the average distance to all segmentations of an image. For smoothing, instead of smoothing images based on local structures, we smooth based on globally optimal image structures. Our methods for segmentation, smoothing, and object detection perform competitively, and we also show promising results in shape-based tracking.
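A discrete stand-in for the "central" segmentation idea: the paper gives a closed-form average over all segmentations, whereas the sketch below simply picks, from a finite sample of segmentations, the one minimizing the average pairwise distance, with the Rand distance as an assumed metric.

```python
import numpy as np
from itertools import combinations

def rand_distance(a, b):
    """1 - Rand index between two label maps. Builds N x N co-assignment
    matrices, so it is meant for small or subsampled maps."""
    a, b = a.ravel(), b.ravel()
    same_a = a[:, None] == a[None, :]
    same_b = b[:, None] == b[None, :]
    return float(np.mean(same_a != same_b))

def central_segmentation(segs):
    """Return the sampled segmentation with minimum average distance
    to all the others (a discrete proxy for the paper's average)."""
    n = len(segs)
    d = np.zeros((n, n))
    for i, j in combinations(range(n), 2):
        d[i, j] = d[j, i] = rand_distance(segs[i], segs[j])
    return segs[int(np.argmin(d.mean(axis=1)))]
```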
Computer Vision and Pattern Recognition | 1993
R. Manmatha; John Oliensis
Image deformations due to relative motion between an observer and an object may be used to infer 3D structure. To first order, these deformations can be written in terms of an affine transform. A novel approach to measuring affine transforms is adopted that correctly handles the problem of corresponding deformed patches. The patches are filtered using Gaussians and derivatives of Gaussians, with the filters deformed according to the affine transform. The problem of finding the affine transform is thereby reduced to finding the appropriate deformed filter. In the special case where the affine transform can be written as a scale change and an in-plane rotation, the Gaussian and first-derivative equations are solved for the scale. The robustness of the method is demonstrated experimentally.
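For the pure-scale case, the deformed-filter idea can be sketched as follows, under the simplifying assumptions that the patches are aligned at their centers and that only one Gaussian response is matched (the paper also uses derivative-of-Gaussian responses and handles in-plane rotation). If patch2(x) = patch1(s·x), the Gaussian response of patch2 at width σ equals the response of patch1 at width s·σ at the aligned centers, so the scale can be read off by matching responses.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def estimate_scale(patch1, patch2, sigma=2.0,
                   scales=np.linspace(0.5, 2.0, 31)):
    """If patch2(x) = patch1(s*x) about aligned centers, then
    (patch2 * G_sigma)(0) == (patch1 * G_{s*sigma})(0), so we search for
    the candidate scale whose deformed filter best matches the response."""
    c1 = tuple(d // 2 for d in patch1.shape)
    c2 = tuple(d // 2 for d in patch2.shape)
    target = gaussian_filter(patch2, sigma)[c2]
    responses = np.array([gaussian_filter(patch1, s * sigma)[c1]
                          for s in scales])
    return float(scales[int(np.argmin(np.abs(responses - target)))])
```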