Publication


Featured research published by Chandra Kambhamettu.


Clinical Linguistics & Phonetics | 2005

Automatic contour tracking in ultrasound images

Min Li; Chandra Kambhamettu; Maureen Stone

In this paper, a new automatic contour tracking system, EdgeTrak, for ultrasound image sequences of the human tongue is presented. The images are produced by a head and transducer support system (HATS). The noise and unrelated high‐contrast edges in ultrasound images make it very difficult to detect the correct tongue surfaces automatically. In our tracking system, a novel active contour model is developed. Unlike classical active contour models, which use only the image gradient as the image force, the proposed model incorporates both edge gradient and intensity information in local regions around each snake element. Unlike other active contour models that use homogeneity of intensity in a region as the constraint, and are thus applicable only to closed contours, the proposed model applies local region information to open contours and can be used to track partial tongue surfaces in ultrasound images. The contour orientation is also taken into account, so that spurious edges in the ultrasound images are discarded. Dynamic programming is used as the optimisation method in our implementation. The proposed active contour model has been applied to human tongue tracking, and its robustness and accuracy have been verified by quantitative comparison with tracking performed by speech scientists.
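The dynamic-programming optimisation mentioned above can be illustrated with a minimal sketch: one snake node per image column, each with a small set of candidate positions, and an energy that trades edge strength against smoothness. The energy weight, candidate sets, and toy image below are assumptions for illustration only, not the EdgeTrak implementation.

```python
def dp_snake(edge_strength, candidates, alpha=1.0):
    """Open-contour snake optimised exactly by dynamic programming.
    edge_strength[x][y]: edge evidence at column x, row y (one node per column).
    candidates[x]: candidate y positions for node x.
    Minimises sum(-edge) + alpha * sum((y[x+1] - y[x])**2)."""
    n = len(candidates)
    cost = [[-edge_strength[0][y] for y in candidates[0]]]
    back = []
    for x in range(1, n):
        row, brow = [], []
        for y in candidates[x]:
            best, arg = float("inf"), -1
            for i, yp in enumerate(candidates[x - 1]):
                c = cost[x - 1][i] + alpha * (y - yp) ** 2
                if c < best:
                    best, arg = c, i
            row.append(best - edge_strength[x][y])
            brow.append(arg)
        cost.append(row)
        back.append(brow)
    # backtrack from the cheapest terminal candidate
    j = min(range(len(cost[-1])), key=lambda k: cost[-1][k])
    path = [candidates[-1][j]]
    for x in range(n - 1, 0, -1):
        j = back[x - 1][j]
        path.append(candidates[x - 1][j])
    return path[::-1]
```

On a toy image with a bright ridge at row 2 in every column, the recovered contour follows the ridge; the smoothness term is what lets the snake bridge columns where the edge evidence is noisy.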


International Conference on Computer Vision | 2009

Learning based digital matting

Yuanjie Zheng; Chandra Kambhamettu

We offer new insights into the digital matting problem by treating it as a semi-supervised learning task in machine learning. A local learning based approach and a global learning based approach are then derived, to better fit scribble-based matting and trimap-based matting, respectively. Our approaches are easy to implement because only simple matrix operations are needed. They are also highly accurate because, by incorporating the kernel trick, they can efficiently handle nonlinear local color distributions that are beyond the ability of many previous works. As shown by theoretical analysis and comprehensive experiments, our approaches outperform many recent matting methods. The new insights may also inspire further work.
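One way to see the semi-supervised view is harmonic-function label propagation: scribbled pixels are clamped to alpha 0 or 1, and every unlabeled pixel repeatedly takes the affinity-weighted average of its neighbours. This is a generic sketch of the idea on a graph, not the paper's local/global learning formulation or its kernelized solver; the graph, uniform affinities, and sweep count below are assumptions.

```python
def propagate_alpha(neighbors, weights, labels, iters=500):
    """Gauss-Seidel sweeps of harmonic label propagation.
    neighbors[i]: pixel indices adjacent to pixel i.
    weights[i]: matching affinity for each neighbour.
    labels: {pixel: alpha} scribble constraints, kept clamped."""
    alpha = [labels.get(i, 0.5) for i in range(len(neighbors))]
    for _ in range(iters):
        for i in range(len(neighbors)):
            if i in labels:
                continue  # scribbled pixels stay fixed
            total = sum(weights[i])
            alpha[i] = sum(w * alpha[j]
                           for j, w in zip(neighbors[i], weights[i])) / total
    return alpha
```

On a 5-pixel chain with one end scribbled foreground (alpha 1) and the other background (alpha 0), the sweeps converge to the harmonic solution, a linear ramp of alphas between the two scribbles.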


Workshop on Applications of Computer Vision | 2015

Deeply-Learned Feature for Age Estimation

Xiaolong Wang; Rui Guo; Chandra Kambhamettu

Human age provides key demographic information. It is also considered an important soft biometric trait for human identification or search. Compared to other pattern recognition problems (e.g., object classification, scene categorization), age estimation is much more challenging, since the differences between facial images with age variations can be subtle and the process of aging varies greatly among individuals. In this work, we investigate deep learning techniques for age estimation based on the convolutional neural network (CNN). A new framework for age feature extraction based on the deep learning model is built. In contrast to previous CNN-based models, we use feature maps obtained in different layers for our estimation work instead of using only the features obtained at the top layer. Additionally, a manifold learning algorithm is incorporated in the proposed scheme, which improves the performance significantly. Furthermore, we evaluate different classification and regression schemes for estimating age using the deeply learned aging pattern (DLA). To the best of our knowledge, this is the first time that a deep learning technique has been introduced and applied to the age estimation problem. Experimental results on two datasets show that the proposed approach significantly outperforms the state of the art.
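The core idea of using feature maps from several layers, rather than only the top layer, is simply to flatten each layer's activations and concatenate them into one descriptor before the manifold-learning and regression stages. The sketch below shows only that concatenation step, with made-up map shapes; the actual CNN architecture, manifold learning, and regressors are beyond a toy example.

```python
def multilayer_descriptor(feature_maps):
    """Flatten activations from each chosen layer (given as nested lists,
    channels x rows x cols) and concatenate them into one feature vector."""
    vec = []
    for fmap in feature_maps:      # one entry per chosen layer
        for channel in fmap:
            for row in channel:
                vec.extend(row)
    return vec
```

A 1-channel 2x2 map from an early layer and a 2-channel 1x1 map from a later layer yield a single 6-dimensional descriptor, which would then be fed to the dimensionality-reduction and age-regression stages.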


Computer Vision and Pattern Recognition | 2001

On 3D scene flow and structure estimation

Ye Zhang; Chandra Kambhamettu

In this paper, novel algorithms for computing dense 3D scene flow from multiview image sequences are described. A new hierarchical rule-based stereo matching algorithm is presented to estimate the initial disparity map. The constraints available under a multiview camera setup are investigated and then utilized in the proposed motion estimation algorithms. We show two different formulations for 3D scene flow computation: one assumes that the initial disparity map is accurate, while the other does not make this assumption. Image segmentation information is used to maintain motion and depth discontinuities. Iterative implementations are used to compute 3D scene flow and structure at every point in the reference image. Novel hard constraints are introduced to make the algorithms more accurate and robust. Promising experimental results are obtained by applying our algorithms to real imagery.
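At its simplest, scene flow under a rectified stereo rig can be read off from disparity and optical flow: back-project the pixel at both time steps and subtract the two 3D points. This is a bare-bones sketch under assumed known calibration (focal length f, baseline B, principal point), not the paper's rule-based matching or its iterative regularized formulation.

```python
def backproject(x, y, disparity, f, B, cx, cy):
    """Pixel (x, y) with disparity d -> 3D point, rectified stereo:
    depth Z = f * B / d, then the pinhole model for X and Y."""
    Z = f * B / disparity
    return ((x - cx) * Z / f, (y - cy) * Z / f, Z)

def scene_flow(p_t, disp_t, flow, disp_t1, f, B, cx, cy):
    """3D motion of the point seen at pixel p_t: back-project at time t,
    follow the optical flow (u, v) to its pixel at t+1, back-project with
    the new disparity, and subtract."""
    x, y = p_t
    X0 = backproject(x, y, disp_t, f, B, cx, cy)
    u, v = flow
    X1 = backproject(x + u, y + v, disp_t1, f, B, cx, cy)
    return tuple(b - a for a, b in zip(X0, X1))
```

For a point at the principal axis whose disparity halves between frames (depth 1 m to 2 m with f = 100, B = 0.1), the recovered scene flow is purely along the optical axis.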


Computer Vision and Pattern Recognition | 1992

Point correspondence recovery in non-rigid motion

Chandra Kambhamettu; Dmitry B. Goldgof

A method for the estimation of point correspondences on a surface undergoing nonrigid motion, based on changes in Gaussian curvature, is described. An approach is proposed for estimating the point correspondences and stretching of a surface undergoing conformal motion with constant (homothetic), linear, or polynomial stretching. A small-motion assumption is used to hypothesize all possible point correspondences. Curvature changes are then computed for each hypothesis, and the difference between the computed curvature changes and those predicted by the conformal motion assumption is calculated. The hypothesis with the smallest error gives the point correspondences between consecutive time frames. Simulations performed on ellipsoidal data illustrate the performance and accuracy of the derived algorithms. The algorithm is also applied to volumetric CT data of the left ventricle of a dog's heart.
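The hypothesize-and-score step can be sketched as follows: under a homothetic (uniform) stretch s, Gaussian curvature scales by 1/s², so each hypothesized correspondence is scored by how far the observed curvature deviates from that prediction. The curvature values, candidate sets, and known stretch below are toy assumptions; the actual method also handles linear and polynomial stretching and estimates the stretch itself.

```python
def match_by_curvature(K1, K2, candidates, stretch):
    """For each point i in frame 1 (Gaussian curvature K1[i]), pick among
    candidates[i] (indices into frame 2) the point whose curvature K2[j]
    is closest to the homothetic prediction K1[i] / stretch**2."""
    matches = []
    for i, cand in enumerate(candidates):
        pred = K1[i] / stretch ** 2
        matches.append(min(cand, key=lambda j: abs(K2[j] - pred)))
    return matches
```

With a stretch of 2, a point of curvature 4.0 is predicted to have curvature 1.0 in the next frame, so the candidate nearest 1.0 wins; the small-motion assumption is what keeps each candidate set small.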


Computer Vision and Pattern Recognition | 2000

Integrated 3D scene flow and structure recovery from multiview image sequences

Ye Zhang; Chandra Kambhamettu

Scene flow is the 3D motion field of points in the world. Given N (N>1) image sequences gathered with an N-eye stereo camera or N calibrated cameras, we present a novel system which integrates 3D scene flow and structure recovery so that each complements the other's performance. We do not assume rigidity of the scene motion, thus allowing for non-rigid motion in the scene. In our work, images are segmented into small regions, and we assume that each small region undergoes similar motion, represented by a 3D affine model. Nonlinear motion model fitting based on both optical flow constraints and stereo constraints is then carried out over each image region in order to simultaneously estimate 3D motion correspondences and structure. To ensure robustness, several regularization constraints are also introduced. A recursive algorithm is designed to incorporate the local and regularization constraints. Experimental results on both synthetic and real data demonstrate the effectiveness of our integrated 3D motion and structure analysis scheme.


Computer Vision and Pattern Recognition | 2008

Single-image vignetting correction using radial gradient symmetry

Yuanjie Zheng; Jingyi Yu; Sing Bing Kang; Stephen Lin; Chandra Kambhamettu

In this paper, we present a novel single-image vignetting correction method based on the symmetric distribution of the radial gradient (RG). The radial gradient is the image gradient along the radial direction with respect to the image center. We show that the RG distribution for natural images without vignetting is generally symmetric, whereas vignetting skews this distribution. We develop two variants of this technique, both of which remove vignetting by minimizing the asymmetry of the RG distribution. Compared with prior approaches to single-image vignetting correction, our method does not require segmentation, and the results are generally better. Experiments show that our technique works for a wide range of images and achieves a speed-up of 4-5 times over a state-of-the-art method.
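The asymmetry-minimisation idea can be sketched in one dimension: divide out a candidate falloff V(r) = 1 - a*r², take gradients along the radius, and keep the candidate whose gradient distribution has the smallest absolute skewness. The quadratic falloff model, 1-D radial profile, and grid search are assumptions for illustration; the paper works on full images with its own vignetting models and optimisation.

```python
def skewness(xs):
    """Standardised third moment of a sample."""
    n = len(xs)
    mu = sum(xs) / n
    m2 = sum((x - mu) ** 2 for x in xs) / n
    m3 = sum((x - mu) ** 3 for x in xs) / n
    return m3 / (m2 ** 1.5)

def estimate_vignetting(profile, candidates):
    """Pick the falloff coefficient a in V(r) = 1 - a*r**2 whose correction
    leaves the radial-gradient distribution least skewed."""
    best_a, best_score = None, float("inf")
    for a in candidates:
        corrected = [v / (1 - a * r * r) for r, v in enumerate(profile)]
        grads = [corrected[r + 1] - corrected[r]
                 for r in range(len(corrected) - 1)]
        score = abs(skewness(grads))
        if score < best_score:
            best_a, best_score = a, score
    return best_a
```

On a synthetic profile whose vignetting-free gradients alternate +1/-1 (so their distribution is exactly symmetric), the grid search recovers the coefficient that was used to darken it, since only the true correction restores zero skewness.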


Computer Vision and Pattern Recognition | 1998

Extraction and tracking of the tongue surface from ultrasound image sequences

Yusuf Sinan Akgul; Chandra Kambhamettu; Maureen Stone

This paper presents a system for automatic extraction and tracking of 2D contours of the tongue surface from digital ultrasound image sequences. The input to the system is provided by a Head and Transducer Support System (HATS), which was developed for use in ultrasound imaging of tongue movement. We developed a novel active contour (snake) model that uses several temporally adjacent images during the extraction of the tongue surface contour for an image frame. The user supplies an initial contour model for a single frame in the whole sequence. Using optical flow and multi-resolution methods, this initial contour is then used to find candidate contour points in the temporally adjacent images. Subsequently, the new snake mechanism is applied to estimate optimal contours for each image frame using these candidate points. In turn, the extracted contours are used as models for the extraction process in adjacent frames. Finally, the system uses a novel postprocessing technique to refine the positions of the contours. We tested the system on 11 different speech sequences, each containing about 25 images. Visual inspection of the detected contours by speech experts shows that the results are very promising and that the system can be effectively employed in speech and swallowing research.
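The propagation step, in which the previous frame's contour seeds the search in the next frame, can be sketched by displacing each contour point by the optical flow sampled at its (rounded) location. The dense flow field here is a toy stand-in; the actual system computes multi-resolution optical flow and then re-optimises the snake from these candidate points.

```python
def propagate_contour(contour, flow):
    """Shift each (x, y) contour point by the flow vector (u, v) sampled
    at its nearest pixel, yielding candidate points for the next frame.
    flow[row][col] holds the (u, v) displacement at that pixel."""
    moved = []
    for x, y in contour:
        u, v = flow[int(round(y))][int(round(x))]
        moved.append((x + u, y + v))
    return moved
```

Under a uniform flow of (1, -0.5), every contour point simply shifts by that vector; with a realistic non-uniform field, each point follows its own local motion, which is what lets one user-supplied contour track the whole sequence.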


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2001

Tracking nonrigid motion and structure from 2D satellite cloud images without correspondences

Lin Zhou; Chandra Kambhamettu; Dmitry B. Goldgof; Kannappan Palaniappan; A.F. Hasler

Tracking both the structure and motion of nonrigid objects from monocular images is an important problem in vision. In this paper, a hierarchical method which integrates local analysis (which recovers small details) and global analysis (which appropriately limits possible nonrigid behaviors) is developed to recover dense depth values and nonrigid motion from a sequence of 2D satellite cloud images without any prior knowledge of point correspondences. This problem is challenging not only because of the absence of correspondence information but also because of the lack of depth cues in the 2D cloud images (scaled orthographic projection). In our method, the cloud images are segmented into several small regions, and local analysis is performed for each region. A recursive algorithm is proposed to integrate local analysis with appropriate global fluid model constraints, on the basis of which a structure and motion analysis system, SMAS, is developed. We believe this is the first reported system for estimating dense structure and nonrigid motion under scaled orthographic views using fluid model constraints. Experiments on cloud image sequences captured by meteorological satellites (GOES-8 and GOES-9) have been performed using our system, along with their validation and analyses. Both structure and 3D motion correspondences are estimated to subpixel accuracy. Our results are very encouraging and have many potential applications in earth and space sciences, especially in cloud models for weather prediction.


Conference on Information and Knowledge Management | 2006

Efficient model selection for regularized linear discriminant analysis

Jieping Ye; Tao Xiong; Qi Li; Ravi Janardan; Jinbo Bi; Vladimir Cherkassky; Chandra Kambhamettu

Classical Linear Discriminant Analysis (LDA) is not applicable to small sample size problems due to the singularity of the scatter matrices involved. Regularized LDA (RLDA) provides a simple strategy to overcome the singularity problem by applying a regularization term, which is commonly estimated via cross-validation from a set of candidates. However, cross-validation may be computationally prohibitive when the candidate set is large. An efficient algorithm for RLDA is presented that computes the optimal transformation of RLDA for a large set of parameter candidates at approximately the same cost as running RLDA a small number of times, thus facilitating efficient model selection for RLDA. An intrinsic relationship between RLDA and Uncorrelated LDA (ULDA), which was recently proposed for dimension reduction and classification, is also presented. More specifically, RLDA is shown to approach ULDA as the regularization value tends to zero; that is, RLDA without any regularization is equivalent to ULDA. It can further be shown that, under a mild condition which has been shown to hold for many high-dimensional datasets, ULDA maps all data points from the same class to a common point. This leads to the overfitting problem in ULDA, which has been observed in several applications. The theoretical analysis presented here provides further justification for the use of regularization in RLDA. Extensive experiments confirm the claimed theoretical estimate of efficiency. Experiments also show that, for a properly chosen regularization parameter, RLDA performs favorably in classification in comparison with ULDA, as well as other existing LDA-based algorithms and Support Vector Machines (SVM).
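The singularity problem and its regularized fix are easy to see in the two-class case, where the discriminant direction is w = (S_w + lam*I)^(-1)(m1 - m2): with one sample per class the within-class scatter S_w is zero and classical LDA is undefined, but any lam > 0 makes the system solvable. The 2-D, two-class sketch below illustrates only this point; the paper's contribution, reusing one matrix decomposition across many candidate values, and the ULDA limit as the regularization tends to zero, require the full matrix treatment.

```python
def mean(xs):
    n = len(xs)
    return [sum(v[i] for v in xs) / n for i in range(len(xs[0]))]

def scatter(xs, m):
    """Within-class scatter contribution of one class (2-D only)."""
    S = [[0.0, 0.0], [0.0, 0.0]]
    for v in xs:
        d = [v[0] - m[0], v[1] - m[1]]
        for i in range(2):
            for j in range(2):
                S[i][j] += d[i] * d[j]
    return S

def rlda_direction(X1, X2, lam):
    """w = (S_w + lam*I)^(-1) (m1 - m2) for 2-D, two-class data; the
    regularizer lam keeps the system solvable even when S_w is singular."""
    m1, m2 = mean(X1), mean(X2)
    S1, S2 = scatter(X1, m1), scatter(X2, m2)
    A = [[S1[0][0] + S2[0][0] + lam, S1[0][1] + S2[0][1]],
         [S1[1][0] + S2[1][0], S1[1][1] + S2[1][1] + lam]]
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    b = [m1[0] - m2[0], m1[1] - m2[1]]
    return [(A[1][1] * b[0] - A[0][1] * b[1]) / det,
            (-A[1][0] * b[0] + A[0][0] * b[1]) / det]
```

With a single sample per class, S_w is the zero matrix and the unregularized system has no solution; any positive lam yields a direction that still separates the two classes when the data are projected onto it.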

Collaboration


Dive into Chandra Kambhamettu's collaborations.

Top Co-Authors

Guoyu Lu

University of Delaware


Mani Thomas

University of Delaware


Min Li

University of Delaware
