
Publication


Featured research published by Sunita L. Hingorani.


IEEE Transactions on Pattern Analysis and Machine Intelligence | 1996

An efficient implementation of Reid's multiple hypothesis tracking algorithm and its evaluation for the purpose of visual tracking

Ingemar J. Cox; Sunita L. Hingorani

An efficient implementation of Reid's multiple hypothesis tracking (MHT) algorithm is presented in which the k-best hypotheses are determined in polynomial time using an algorithm due to Murty (1968). The MHT algorithm is then applied to several motion sequences. The MHT capabilities of track initiation, termination, and continuation are demonstrated together with the latter's capability to provide low-level support for temporary occlusion of tracks. Between 50 and 150 corner features are simultaneously tracked in the image plane over a sequence of up to 51 frames. Each corner is tracked using a simple linear Kalman filter and any data association uncertainty is resolved by the MHT. Kalman filter parameter estimation is discussed, and experimental results show that the algorithm is robust to errors in the motion model. An investigation of the performance of the algorithm as a function of look-ahead (tree depth) indicates that high accuracy can be obtained for tree depths as shallow as three. Experimental results suggest that a real-time MHT solution to the motion correspondence problem is possible for certain classes of scenes.
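The per-corner filter described above can be sketched as a constant-velocity linear Kalman filter. The structure (position/velocity state, position-only measurements) follows the abstract's simple linear model; the frame interval and the noise covariances Q and R below are illustrative assumptions, not values from the paper.

```python
# Minimal sketch: constant-velocity Kalman filter for one corner feature.
# dt, Q, and R are illustrative assumptions, not values from the paper.
import numpy as np

dt = 1.0  # one frame between measurements (assumed)
F = np.array([[1, 0, dt, 0],     # state transition for (px, py, vx, vy)
              [0, 1, 0, dt],
              [0, 0, 1,  0],
              [0, 0, 0,  1]], dtype=float)
H = np.array([[1, 0, 0, 0],      # only the (x, y) position is measured
              [0, 1, 0, 0]], dtype=float)
Q = 0.01 * np.eye(4)             # process noise (motion-model error)
R = 1.0 * np.eye(2)              # measurement noise

def kf_step(x, P, z):
    """One predict/update cycle for a measurement z = (x, y)."""
    # Predict
    x = F @ x
    P = F @ P @ F.T + Q
    # Update
    S = H @ P @ H.T + R                 # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)      # Kalman gain
    x = x + K @ (z - H @ x)
    P = (np.eye(4) - K @ H) @ P
    return x, P

x = np.array([0.0, 0.0, 1.0, 1.0])      # initial state: pos (0,0), vel (1,1)
P = np.eye(4)
for z in [np.array([1.0, 1.1]), np.array([2.1, 2.0]), np.array([2.9, 3.0])]:
    x, P = kf_step(x, P, z)
# After three frames the estimate tracks the noisy measurements near (3, 3).
```

In the full system, one such filter would run per corner, with the MHT resolving which measurement feeds which filter's update step.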


Computer Vision and Image Understanding | 1996

A Maximum Likelihood Stereo Algorithm

Ingemar J. Cox; Sunita L. Hingorani; Satish Rao; Bruce M. Maggs

A stereo algorithm is presented that optimizes a maximum likelihood cost function. The maximum likelihood cost function assumes that corresponding features in the left and right images are normally distributed about a common true value and consists of a weighted squared error term if two features are matched or a (fixed) cost if a feature is determined to be occluded. The stereo algorithm finds the set of correspondences that maximizes the cost function subject to ordering and uniqueness constraints. The stereo algorithm is independent of the matching primitives. However, for the experiments described in this paper, matching is performed on the individual pixel intensities.
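The per-decision cost the abstract describes can be written down directly: a weighted squared error for a match, or a fixed penalty for an occlusion. The noise standard deviation and the occlusion penalty below are illustrative assumptions.

```python
# Minimal sketch of the maximum-likelihood per-feature cost: a weighted
# squared error when two features are matched, or a fixed penalty when a
# feature is declared occluded. SIGMA and OCCLUSION_COST are assumptions.
OCCLUSION_COST = 2.0   # fixed cost for an unmatched (occluded) feature
SIGMA = 4.0            # assumed std. dev. of the measurement noise

def match_cost(left_val, right_val):
    """Weighted squared error for features normally distributed
    about a common true value."""
    return (left_val - right_val) ** 2 / (2.0 * SIGMA ** 2)

def pair_cost(left_val, right_val=None):
    """Cost of one decision: match the two features, or occlude one."""
    if right_val is None:          # feature declared occluded
        return OCCLUSION_COST
    return match_cost(left_val, right_val)
```

An optimizer would then pick, for every feature, whichever decision yields the lower total cost while respecting the ordering and uniqueness constraints.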


international conference on image processing | 1995

Dynamic histogram warping of image pairs for constant image brightness

Ingemar J. Cox; Sébastien Roy; Sunita L. Hingorani



International Journal of Computer Vision | 1993

A Bayesian multiple-hypothesis approach to edge grouping and contour segmentation

Ingemar J. Cox; James M. Rehg; Sunita L. Hingorani



international conference on pattern recognition | 1994

An efficient implementation and evaluation of Reid's multiple hypothesis tracking algorithm for visual tracking

Ingemar J. Cox; Sunita L. Hingorani



british machine vision conference | 1992

Stereo Without Disparity Gradient Smoothing: a Bayesian Sensor Fusion Solution

Ingemar J. Cox; Sunita L. Hingorani; Bruce M. Maggs; Satish Rao

Contrary to popular belief, the pixel-based stereo appears to be robust for a variety of images. It also has the advantages of (i) providing a dense disparity map, (ii) requiring no feature extraction, and (iii) avoiding the adaptive windowing problem of area-based correlation methods. Because feature extraction and windowing are unnecessary, a very fast implementation is possible. Experimental results reveal that good stereo correspondences can be found using only ordering and uniqueness constraints, i.e., without local smoothness constraints. However, it is shown that the original maximum likelihood stereo algorithm exhibits multiple global minima. The dynamic programming algorithm is guaranteed to find one, but not necessarily the same one for each epipolar scanline, causing erroneous correspondences which are visible as small local differences between neighboring scanlines. Traditionally, regularization, which modifies the original cost function, has been applied to the problem of multiple global minima. We developed several variants of the algorithm that avoid classical regularization while imposing several global cohesiveness constraints. We believe this is a novel approach that has the advantage of guaranteeing that solutions minimize the original cost function and preserve discontinuities. The constraints are based on minimizing the total number of horizontal and/or vertical discontinuities along and/or between adjacent epipolar lines, and local smoothing is avoided. Experiments reveal that minimizing the sum of the horizontal and vertical discontinuities provides the most accurate results. A high percentage of correct matches and very little smearing of depth discontinuities are obtained. An alternative to imposing cohesiveness constraints to reduce the correspondence ambiguities is to use more than two cameras. We therefore extend the two-camera maximum likelihood to N cameras.
The N-camera stereo algorithm determines the “best” set of correspondences between a given pair of cameras, referred to as the principal cameras. Knowledge of the relative positions of the cameras allows the 3D point hypothesized by an assumed correspondence of two features in the principal pair to be projected onto the image plane of the remaining N − 2 cameras. These N − 2 points are then used to verify proposed matches. Not only does the algorithm explicitly model occlusion between features of the principal pair, but the possibility of occlusions in the N − 2 additional views is also modeled. Previous work did not model this occlusion process, the benefits and importance of which are experimentally verified. Like other multiframe stereo algorithms, the computational and memory costs of this approach increase linearly with each additional view. Experimental results are shown for two outdoor scenes. It is clearly demonstrated that the number of correspondence errors is significantly reduced as the number of views/cameras is increased.


european conference on computer vision | 1992

A Bayesian Multiple Hypothesis Approach to Contour Grouping

Ingemar J. Cox; James M. Rehg; Sunita L. Hingorani

The constant image brightness (CIB) assumption assumes that the intensities of corresponding points in two images are equal. This assumption is central to much of computer vision. However, surprisingly little work has been performed to support this assumption, despite the fact that many algorithms are very sensitive to deviations from CIB. An examination of the images contained in the SRI JISCT stereo database revealed that the constant image brightness assumption is indeed often false. Moreover, simple additive/multiplicative models of the form I_L = βI_R + α do not adequately represent the observed deviations. A comprehensive physical model of the observed deviations is difficult to develop. However, many potential sources of deviations can be represented by a nonlinear monotonically increasing relationship between intensities. Under these conditions, we believe that an expansion/contraction matching of the intensity histograms represents the best method to both measure the degree of validity of the CIB assumption and correct for it. Dynamic histogram warping (DHW) is closely related to histogram specification. It is shown that histogram specification introduces artifacts that do not occur with dynamic histogram warping. Experimental results show that image histograms are closely matched after DHW, especially when both histograms are modified simultaneously. DHW is also capable of removing simple constant additive and multiplicative biases without derivative operations, thereby avoiding amplification of high frequency noise. DHW can improve the estimates from stereo and optical flow estimators.
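The expansion/contraction matching that DHW performs can be illustrated with a DTW-style dynamic programme over two intensity histograms, where one bin may absorb several bins of the other (unlike the fixed mapping of histogram specification). The bin-difference cost and the toy bin counts below are illustrative assumptions, not the paper's exact formulation.

```python
# Minimal DTW-style sketch of expansion/contraction histogram matching.
# The absolute-difference bin cost is an illustrative assumption.
def histogram_warp_cost(h1, h2):
    """Minimum cost of warping histogram h1 onto h2, allowing a single
    bin in either histogram to match a run of bins in the other."""
    n, m = len(h1), len(h2)
    INF = float("inf")
    d = [[INF] * m for _ in range(n)]
    d[0][0] = abs(h1[0] - h2[0])
    for i in range(n):
        for j in range(m):
            if i == 0 and j == 0:
                continue
            best = INF
            if i > 0:
                best = min(best, d[i - 1][j])          # expand h2 bin j
            if j > 0:
                best = min(best, d[i][j - 1])          # expand h1 bin i
            if i > 0 and j > 0:
                best = min(best, d[i - 1][j - 1])      # one-to-one match
            d[i][j] = abs(h1[i] - h2[j]) + best
    return d[n - 1][m - 1]
```

Identical histograms align at zero cost, while a missing or merged bin is absorbed by an expansion step rather than by a global additive/multiplicative correction.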


international conference on acoustics, speech, and signal processing | 1994

Recursive tracking of formants in speech signals

Mahesan Niranjan; Ingemar J. Cox; Sunita L. Hingorani

A contour segmentation algorithm is presented that takes an edge map and extracts continuous curves of arbitrary smoothness, correctly handling curve intersections and capable of extrapolating over significant measurement gaps. The algorithm incorporates noise models of the edge-detection process and limited scene statistics. It is based on an explicit contour model and employs a statistical distance measure to quantify the likelihood of each segmentation hypothesis. A Bayesian multiple-hypothesis tree organizes possible segmentations, making it possible to postpone grouping decisions until a sufficient amount of information is available. We have demonstrated its performance on real and synthetic images.


International Journal of Computer Vision | 1993

A Bayesian Multiple Hypothesis Approach to Contour Segmentation

Ingemar J. Cox; James M. Rehg; Sunita L. Hingorani

An efficient implementation of Reid's multiple hypothesis tracking (MHT) algorithm is presented in which the k-best hypotheses are determined in polynomial time using an algorithm due to Murty (1968). The MHT algorithm is then applied to several motion sequences. The MHT capabilities of track initiation, termination, and continuation are demonstrated. Continuation allows the MHT to function despite temporary occlusion of tracks. Between 50 and 150 corner features are simultaneously tracked in the image plane over a sequence of up to 60 frames. Each corner is tracked using a simple linear Kalman filter and any data association uncertainty is resolved by the MHT. Kalman filter parameter estimation is discussed, and experimental results show that the algorithm is robust to errors in the motion model.


International Journal of Computer Vision | 1993

A Bayesian multiple hypothesis approach to contour grouping and segmentation

Ingemar J. Cox; James M. Rehg; Sunita L. Hingorani

A maximum likelihood stereo algorithm is presented that avoids the need for smoothing based on disparity gradients, provided that the common uniqueness and monotonic ordering constraints are applied. A dynamic programming algorithm allows matching of the two epipolar lines of length N and M respectively in O(N M) time and in O(N) time if a disparity limit is set. The stereo algorithm is independent of the matching primitives. A high percentage of correct matches and little smearing of depth discontinuities is obtained based on matching individual pixel intensities. Because feature extraction and windowing are unnecessary, a very fast implementation is possible.
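The O(N M) scanline matching described above can be sketched as an edit-distance-style dynamic programme over one pair of epipolar lines: each step either matches pixel i to pixel j under a squared-error cost or skips a pixel in one image at a fixed occlusion penalty, which enforces the ordering and uniqueness constraints. The occlusion penalty value and toy intensities are illustrative assumptions.

```python
# Minimal sketch: edit-distance-style DP over one pair of epipolar lines.
# A step matches left[i] to right[j] (squared error) or occludes a pixel
# at a fixed cost. The occlusion value 20.0 is an illustrative assumption.
def scanline_dp(left, right, occlusion=20.0):
    """Minimum total cost of aligning two epipolar scanlines."""
    n, m = len(left), len(right)
    INF = float("inf")
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(n + 1):
        for j in range(m + 1):
            if cost[i][j] == INF:
                continue
            if i < n and j < m:                 # match left[i] with right[j]
                c = cost[i][j] + (left[i] - right[j]) ** 2
                if c < cost[i + 1][j + 1]:
                    cost[i + 1][j + 1] = c
            if i < n:                            # left[i] occluded
                c = cost[i][j] + occlusion
                if c < cost[i + 1][j]:
                    cost[i + 1][j] = c
            if j < m:                            # right[j] occluded
                c = cost[i][j] + occlusion
                if c < cost[i][j + 1]:
                    cost[i][j + 1] = c
    return cost[n][m]

# Identical scanlines align at zero cost; small intensity noise is paid
# as squared error rather than forcing occlusions.
a = [10, 50, 90, 50, 10]
```

Because every DP move advances monotonically through both scanlines, the recovered correspondence automatically satisfies the ordering and uniqueness constraints; a disparity limit would simply restrict j to a band around i, giving the O(N) behaviour mentioned above.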

Collaboration


Dive into Sunita L. Hingorani's collaboration.

Top Co-Authors

Ingemar J. Cox

University College London

James M. Rehg

Georgia Institute of Technology

Satish Rao

University of California