Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Ravishankar Sivalingam is active.

Publications


Featured research published by Ravishankar Sivalingam.


European Conference on Computer Vision | 2010

Tensor sparse coding for region covariances

Ravishankar Sivalingam; Daniel Boley; Vassilios Morellas; Nikolaos Papanikolopoulos

Sparse representation of signals has been the focus of much research in recent years. A vast majority of existing algorithms deal with vectors, and higher-order data like images are usually vectorized before processing. However, the structure of the data may be lost in the process, leading to poor representation and overall performance degradation. In this paper, we propose a novel approach for sparse representation of positive definite matrices, where vectorization would have destroyed the inherent structure of the data. The sparse decomposition of a positive definite matrix is formulated as a convex optimization problem, which falls under the category of determinant maximization (MAXDET) problems [1], for which efficient interior point algorithms exist. Experimental results are shown with simulated examples as well as in real-world computer vision applications, demonstrating the suitability of the new model. This forms the first step toward extending the cornucopia of sparsity-based algorithms to positive definite matrices.
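
A minimal sketch of this kind of formulation, assuming the LogDet-divergence form of the MAXDET problem mentioned above: an SPD signal S is encoded over SPD dictionary atoms A_i with nonnegative sparse coefficients. The atoms, signal, and penalty weight lam are synthetic placeholders, and the paper's exact constraints may differ.

```python
# Sketch: sparse coding of an SPD matrix with a LogDet-style divergence,
# posed as a convex (MAXDET-type) problem via CVXPY. All data are synthetic.
import numpy as np
import cvxpy as cp

def random_spd(n, rng):
    """Generate a random symmetric positive definite matrix."""
    M = rng.standard_normal((n, n))
    return M @ M.T + n * np.eye(n)

rng = np.random.default_rng(0)
n, k = 5, 20                                   # matrix size, dictionary size
A = [random_spd(n, rng) for _ in range(k)]     # SPD dictionary atoms (assumed given)
S = random_spd(n, rng)                         # SPD signal to encode
S_inv = np.linalg.inv(S)
lam = 0.1                                      # sparsity weight (illustrative)

x = cp.Variable(k, nonneg=True)                # nonnegative sparse codes
X = sum(x[i] * A[i] for i in range(k))         # reconstruction, SPD by construction
# LogDet (Burg) divergence between X and S up to a constant, plus an l1 penalty;
# the -log_det term keeps the problem convex.
objective = cp.Minimize(cp.trace(X @ S_inv) - cp.log_det(X) + lam * cp.sum(x))
cp.Problem(objective).solve(solver=cp.SCS)
print("sparse code:", np.round(x.value, 3))
```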


International Conference on Robotics and Automation | 2012

Compact covariance descriptors in 3D point clouds for object recognition

Duc Fehr; Anoop Cherian; Ravishankar Sivalingam; Sam Nickolay; Vassilios Morellas; Nikolaos Papanikolopoulos

One of the most important tasks for mobile robots is to sense their environment. Further tasks might include the recognition of objects in the surrounding environment. Three-dimensional range finders have become the sensors of choice for mapping the environment of a robot. Recognizing objects in point clouds provided by such sensors is a difficult task. The main contribution of this paper is the introduction of a new covariance-based point cloud descriptor for such object recognition. Covariance-based descriptors have been very successful in image processing. One of the main advantages of these descriptors is their relatively small size. The comparisons between different covariance matrices can also be made very efficient. Experiments with real-world and synthetic data show the superior performance of the covariance descriptors on point clouds compared to state-of-the-art methods.
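
As a hedged illustration of the descriptor itself (not the paper's full recognition pipeline), the sketch below builds a covariance descriptor from per-point features and compares two descriptors with a log-Euclidean distance. The feature choice (positions plus surface normals) and the random data are assumptions for the example.

```python
# Sketch: covariance descriptor of a 3D point cloud segment and a
# log-Euclidean comparison between two such descriptors. Data are random
# placeholders standing in for per-point features (e.g., x, y, z, nx, ny, nz).
import numpy as np

def covariance_descriptor(features):
    """features: (N, d) array of per-point features -> (d, d) covariance."""
    centered = features - features.mean(axis=0, keepdims=True)
    return centered.T @ centered / (len(features) - 1)

def spd_log(C):
    """Matrix logarithm of an SPD matrix via its eigendecomposition."""
    w, V = np.linalg.eigh(C)
    return (V * np.log(w)) @ V.T

def log_euclidean_distance(C1, C2):
    """Compare SPD descriptors in the log-Euclidean sense."""
    return np.linalg.norm(spd_log(C1) - spd_log(C2), ord="fro")

rng = np.random.default_rng(0)
feats_a = rng.standard_normal((500, 6))   # stand-in features for segment A
feats_b = rng.standard_normal((500, 6))   # stand-in features for segment B
Ca, Cb = covariance_descriptor(feats_a), covariance_descriptor(feats_b)
print("descriptor size:", Ca.shape, "distance:", round(log_euclidean_distance(Ca, Cb), 3))
```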


Advanced Video and Signal Based Surveillance | 2009

Counting People in Groups

Duc Fehr; Ravishankar Sivalingam; Vassilios Morellas; Nikolaos Papanikolopoulos; Osama A. Lotfallah; Youngchoon Park

Cameras are becoming a common tool for automated vision purposes due to their low cost. In an era of growing security concerns, camera surveillance systems have become not only important but also necessary. Algorithms for several tasks such as detecting abandoned objects and tracking people have already been successfully developed. While tracking people is relatively easy, counting people in groups is much more challenging. The mutual occlusions between people in a group make it difficult to provide an exact count. The aim of this work is to present a method of estimating the number of people in group scenarios. Several considerations for counting people are illustrated in this paper, and experimental results of the method are described and discussed.


International Conference on Computer Vision | 2011

Positive definite dictionary learning for region covariances

Ravishankar Sivalingam; Daniel Boley; Vassilios Morellas; Nikolaos Papanikolopoulos

Sparse models have proven to be extremely successful in image processing and computer vision, and most efforts have been focused on sparse representation of vectors. The success of sparse modeling and the popularity of region covariances have inspired the development of sparse coding approaches for positive definite matrices. While in earlier work [1], the dictionary was pre-determined, it is clearly advantageous to learn a concise dictionary adaptively from the data at hand. In this paper, we propose a novel approach for dictionary learning over positive definite matrices. The dictionary is learned by alternating minimization between the sparse coding and dictionary update stages, and two different atom update methods are described. The online versions of the dictionary update techniques are also outlined. Experimental results demonstrate that the proposed learning methods yield better dictionaries for positive definite sparse coding. The learned dictionaries are applied to texture and face data, leading to improved classification accuracy and strong detection performance, respectively.
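
The alternating scheme can be sketched as follows. This is an illustrative skeleton only: the coding step here is a nonnegative least-squares stand-in and the atom update is a projected gradient step, whereas the paper uses LogDet-based sparse coding and its own atom-update rules; all data are synthetic.

```python
# Sketch: alternating minimization for learning SPD dictionary atoms.
# Simplified stand-ins are used for both stages (see lead-in above).
import numpy as np
from scipy.optimize import nnls

def random_spd(n, rng):
    M = rng.standard_normal((n, n))
    return M @ M.T + n * np.eye(n)

def project_spd(M, eps=1e-6):
    """Symmetrize and clip eigenvalues so an atom stays positive definite."""
    S = (M + M.T) / 2
    w, V = np.linalg.eigh(S)
    return (V * np.maximum(w, eps)) @ V.T

def sparse_code(S, atoms):
    """Nonnegative codes x with sum_i x[i] * atoms[i] approximating S."""
    A = np.stack([a.ravel() for a in atoms], axis=1)
    x, _ = nnls(A, S.ravel())
    return x

rng = np.random.default_rng(0)
n, k, T = 4, 6, 30                              # matrix size, atoms, training samples
data = [random_spd(n, rng) for _ in range(T)]
atoms = [random_spd(n, rng) for _ in range(k)]

for _ in range(10):                             # alternate: codes <-> dictionary
    codes = [sparse_code(S, atoms) for S in data]
    for j in range(k):                          # gradient step on each atom
        grad = np.zeros((n, n))
        for S, c in zip(data, codes):
            recon = sum(c[i] * atoms[i] for i in range(k))
            grad += c[j] * (recon - S)
        atoms[j] = project_spd(atoms[j] - 0.01 * grad / T)

recon0 = sum(codes[0][i] * atoms[i] for i in range(k))
print("reconstruction error on first sample:", round(np.linalg.norm(recon0 - data[0]), 3))
```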


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2014

Tensor Sparse Coding for Positive Definite Matrices

Ravishankar Sivalingam; Daniel Boley; Vassilios Morellas; Nikolaos Papanikolopoulos

In recent years, there has been extensive research on sparse representation of vector-valued signals. In the matrix case, the data points are merely vectorized and treated as vectors thereafter (for example, image patches). However, this approach cannot be used for all matrices, as it may destroy the inherent structure of the data. Symmetric positive definite (SPD) matrices constitute one such class of signals, where their implicit structure of positive eigenvalues is lost upon vectorization. This paper proposes a novel sparse coding technique for positive definite matrices, which respects the structure of the Riemannian manifold and preserves the positivity of their eigenvalues, without resorting to vectorization. Synthetic and real-world computer vision experiments with region covariance descriptors demonstrate the need for and the applicability of the new sparse coding model. This work serves to bridge the gap between the sparse modeling paradigm and the space of positive definite matrices.
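
For reference, the kind of objective solved in this line of work can be written with the LogDet (Burg) matrix divergence. This is a hedged reconstruction of the general form, with S the SPD signal, A_i the SPD dictionary atoms, and x the nonnegative sparse code; the paper's exact constraints may differ.

```latex
\min_{x \ge 0} \; D_{\mathrm{ld}}\!\Big(\sum_{i=1}^{K} x_i A_i,\; S\Big) + \lambda \lVert x \rVert_1,
\qquad
D_{\mathrm{ld}}(X, S) = \operatorname{tr}\!\left(X S^{-1}\right) - \log\det\!\left(X S^{-1}\right) - n .
```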


International Conference on Robotics and Automation | 2011

Dictionary learning for robust background modeling

Ravishankar Sivalingam; Alden D'Souza; Michael E. Bazakos; Roland Miezianko

Background subtraction is a fundamental task in many computer vision applications, such as robotics and automated surveillance systems. The performance of high-level vision tasks such as object detection and tracking is dependent on effective foreground detection techniques. In this paper, we propose a novel background modeling algorithm that represents the background as a linear combination of dictionary atoms and the foreground as a sparse error, when one uses the respective set of dictionary atoms as basis elements to linearly approximate/reconstruct a new image. The dictionary atoms represent variations of the background model, and are learned from the training frames. The sparse foreground estimation during the training and performance phases is formulated as a Lasso [1] problem, while the dictionary update step in the training phase is motivated from the K-SVD algorithm [2]. Our proposed method works well in the presence of foreground in the training frames, and also gives the foreground masks for the training frames as a by-product of the batch training phase. Experimental validation is provided on standard datasets with ground truth information, and the receiver operating characteristic (ROC) curves are shown.
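
A minimal sketch of the background-plus-sparse-error model follows, with the foreground recovered via a Lasso. The dictionary, frame, and penalty weight are synthetic stand-ins, and the paper additionally learns the dictionary from training frames with a K-SVD-style update, which is not shown here.

```python
# Sketch: model a frame as background (combination of dictionary atoms) plus a
# sparse error, and recover the sparse error (foreground) with a Lasso.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
d, k = 400, 8                                   # pixels per (vectorized) frame, atoms
D = rng.standard_normal((d, k))                 # background dictionary (assumed learned)
frame = D @ rng.standard_normal(k)              # frame explained by the background...
frame[100:140] += 5.0                           # ...plus a synthetic foreground blob

# With the background codes solved in least squares, the sparse error e obeys a
# Lasso with design matrix P = I - D D^+ (projector off the background span):
#   min_e ||P (frame - e)||^2 / (2 d) + alpha * ||e||_1
P = np.eye(d) - D @ np.linalg.pinv(D)
lasso = Lasso(alpha=0.002, fit_intercept=False, max_iter=5000)
lasso.fit(P, P @ frame)
foreground = lasso.coef_                        # nonzero entries mark foreground pixels
print("foreground support size:", int(np.count_nonzero(foreground)))
```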


International Conference on Distributed Smart Cameras | 2009

Metric learning for semi-supervised clustering of Region Covariance Descriptors

Ravishankar Sivalingam; Vassilios Morellas; Daniel Boley; Nikolaos Papanikolopoulos

In this paper we extend distance metric learning to a new class of descriptors known as Region Covariance Descriptors. Region covariances have become increasingly popular as features for object detection and classification over the past few years. Given a set of pairwise constraints by the user, we want to perform semi-supervised clustering of these descriptors aided by metric learning approaches. The covariance descriptors belong to the special class of symmetric positive definite (SPD) tensors, and current algorithms cannot deal with them directly without violating their positive definiteness. In our framework, the distance metric on the manifold of SPD matrices is represented as an L2 distance in a vector space, and a Mahalanobis-type distance metric is learnt in the new space, in order to improve the performance of semi-supervised clustering of region covariances. We present results from clustering of covariance descriptors representing different human images, from single and multiple camera views. This transformation from a set of positive definite tensors to a Euclidean space paves the way for the application of many other vector-space methods to this class of descriptors.
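
The sketch below illustrates the general idea: embed each SPD descriptor in a vector space via the matrix logarithm, then measure a Mahalanobis-type distance there. The "learned" metric is a simple within-class whitening stand-in rather than the paper's algorithm, and the descriptors and labels are random placeholders.

```python
# Sketch: log-Euclidean embedding of SPD region covariances followed by a
# Mahalanobis-type distance in the embedded vector space.
import numpy as np

def random_spd(n, rng):
    M = rng.standard_normal((n, n))
    return M @ M.T + n * np.eye(n)

def log_vec(C):
    """Vectorize logm(C): diagonal plus sqrt(2)-weighted upper triangle."""
    w, V = np.linalg.eigh(C)
    L = (V * np.log(w)) @ V.T
    iu = np.triu_indices_from(L, k=1)
    return np.concatenate([np.diag(L), np.sqrt(2) * L[iu]])

rng = np.random.default_rng(0)
covs = [random_spd(5, rng) for _ in range(40)]   # placeholder descriptors
labels = np.repeat(np.arange(4), 10)             # 4 dummy clusters
X = np.stack([log_vec(C) for C in covs])

# Mahalanobis metric from the average within-class scatter (a whitening stand-in).
within = sum(np.cov(X[labels == c].T) for c in np.unique(labels)) / 4
M = np.linalg.inv(within + 1e-6 * np.eye(X.shape[1]))

def mahalanobis(u, v, M):
    diff = u - v
    return float(np.sqrt(diff @ M @ diff))

print("distance between first two descriptors:", round(mahalanobis(X[0], X[1], M), 3))
```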


International Conference on Robotics and Automation | 2012

A multi-sensor visual tracking system for behavior monitoring of at-risk children

Ravishankar Sivalingam; Anoop Cherian; Joshua Fasching; Nicholas Walczak; Nathaniel D. Bird; Vassilios Morellas; Barbara Murphy; Kathryn R. Cullen; Kelvin O. Lim; Guillermo Sapiro; Nikolaos Papanikolopoulos

Clinical studies confirm that mental illnesses such as autism, Obsessive Compulsive Disorder (OCD), etc. show behavioral abnormalities even at very young ages; the early diagnosis of which can help steer effective treatments. Most often, the behavior of such at-risk children deviates in very subtle ways from that of a normal child; correct diagnosis of which requires prolonged and continuous monitoring of their activities by a clinician, which is a difficult and time-intensive task. As a result, the development of automation tools for assisting in such monitoring activities will be an important step towards effective utilization of the diagnostic resources. In this paper, we approach the problem from a computer vision standpoint, and propose a novel system for the automatic monitoring of the behavior of children in their natural environment through the deployment of multiple non-invasive sensors (cameras and depth sensors). We provide details of our system, together with algorithms for the robust tracking of the activities of the children. Our experiments, conducted in the Shirley G. Moore Laboratory School, demonstrate the effectiveness of our methodology.


Intelligent Robots and Systems | 2013

Recognition of ballet micro-movements for use in choreography

Justin Dancs; Ravishankar Sivalingam; Guruprasad Somasundaram; Vassilios Morellas; Nikolaos Papanikolopoulos

Computer vision as a field has a wide and diverse range of applications. The specific application for this project was in the realm of dance, notably ballet and choreography. This project was a proof of concept for a choreography assistance tool used to recognize and record dance movements demonstrated by a choreographer. With the commercial arena in mind, the Microsoft Kinect was chosen as the imaging hardware, and a pilot set was chosen to verify recognition feasibility. Before classification, all training and test data were transformed to a more suitable representation scheme, so that only the aspects needed to distinguish the moves in the pilot set were passed to the classifier. In addition, several classification algorithms based on Nearest Neighbor (NN) and Support Vector Machine (SVM) methods were tested and compared, both with a single dictionary and across several different subjects. The results were promising given the scope of the project, and several extensions of this work are proposed.
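
A hedged sketch of the classification stage: skeleton joints (random placeholders here, not Kinect output) are expressed relative to a torso joint and fed to Nearest Neighbor and SVM classifiers. The joint count, label set, and representation are illustrative assumptions, not the paper's exact scheme.

```python
# Sketch: torso-relative skeleton features classified with NN and SVM models.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_samples, n_joints, n_moves = 200, 20, 4
labels = rng.integers(0, n_moves, size=n_samples)

# One prototype pose per dummy "move", plus per-sample noise.
prototypes = rng.standard_normal((n_moves, n_joints, 3))
joints = prototypes[labels] + 0.3 * rng.standard_normal((n_samples, n_joints, 3))

# Representation step: subtract the torso joint (index 0) so the features do
# not depend on where the dancer stands in the room.
features = (joints - joints[:, :1, :]).reshape(n_samples, -1)

X_tr, X_te, y_tr, y_te = train_test_split(features, labels, random_state=0)
for clf in (KNeighborsClassifier(n_neighbors=3), SVC(kernel="rbf", C=1.0)):
    acc = clf.fit(X_tr, y_tr).score(X_te, y_te)
    print(type(clf).__name__, "accuracy:", round(acc, 2))
```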


IEEE Transactions on Intelligent Transportation Systems | 2013

Classification and Counting of Composite Objects in Traffic Scenes Using Global and Local Image Analysis

Guruprasad Somasundaram; Ravishankar Sivalingam; Vassilios Morellas; Nikolaos Papanikolopoulos

Object recognition algorithms often focus on determining the class of a detected object in a scene. Two significant phases are usually involved in object recognition. The first phase is the object representation phase, in which the most suitable features that provide the best discriminative power under constraints such as lighting, resolution, scale, and view variations are chosen to describe the objects. The second phase is to use this representation space to develop models for each object class using discriminative classifiers. In this paper, we focus on composite objects, i.e., objects with two or more simpler classes that are interconnected in a complicated manner. One classic example of such a scenario is a bicyclist. A bicyclist consists of a bicycle and a human who rides the bicycle. When we are faced with the task of classifying bicyclists and pedestrians, it is counterintuitive and often hard to come up with a discriminative classifier to distinguish the two classes. We explore global image analysis based on bag of visual words to compare the results with local image analysis, in which we attempt to distinguish the individual parts of the composite object. We also propose a unified naive Bayes framework and a combined histogram feature method for combining the individual classifiers for enhanced performance.
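
The naive Bayes combination idea can be sketched as follows, with made-up class priors and part-classifier likelihoods standing in for the paper's actual detectors; only the combination rule itself is illustrated.

```python
# Sketch: combine per-part classifier evidence under a naive (independence)
# Bayes assumption: P(class | parts) is proportional to P(class) * prod_i P(part_i | class).
import numpy as np

classes = ["pedestrian", "bicyclist"]
priors = np.array([0.7, 0.3])                    # assumed class priors

# P(part evidence | class) from two local classifiers (rows: parts, cols: classes).
likelihoods = np.array([
    [0.2, 0.9],   # "bicycle parts detected" given pedestrian / bicyclist
    [0.8, 0.9],   # "person detected"        given pedestrian / bicyclist
])

posterior = priors * likelihoods.prod(axis=0)
posterior /= posterior.sum()
for c, p in zip(classes, posterior):
    print(f"P({c} | parts) = {p:.2f}")
```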

Collaboration


Dive into Ravishankar Sivalingam's collaborations.

Top Co-Authors
Daniel Boley

University of Minnesota

Evan Ribnick

University of Minnesota
