
Publications


Featured research published by Guruprasad Somasundaram.


Journal of Intelligent and Robotic Systems | 2008

Multi-Camera Human Activity Monitoring

Loren Fiore; Duc Fehr; Robert Bodor; Andrew Drenner; Guruprasad Somasundaram; Nikolaos Papanikolopoulos

With the proliferation of security cameras, the approach taken to monitoring and placement of these cameras is critical. This paper presents original work in the area of multiple camera human activity monitoring. First, a system is presented that tracks pedestrians across a scene of interest and recognizes a set of human activities. Next, a framework is developed for the placement of multiple cameras to observe a scene. This framework was originally used in a limited X, Y, pan formulation but is extended to include height (Z) and tilt. Finally, an active dual-camera system for task recognition at multiple resolutions is developed and tested. All of these systems are tested under real-world conditions, and are shown to produce usable results.


Computer Vision and Image Understanding | 2014

Action recognition using global spatio-temporal features derived from sparse representations

Guruprasad Somasundaram; Anoop Cherian; Vassilios Morellas; Nikolaos Papanikolopoulos

Recognizing actions is one of the important challenges in computer vision with respect to video data, with applications to surveillance, diagnostics of mental disorders, and video retrieval. Compared to other data modalities such as documents and images, processing video data demands orders of magnitude higher computational and storage resources. One way to alleviate this difficulty is to focus the computations on informative (salient) regions of the video. In this paper, we propose a novel global spatio-temporal self-similarity measure to score saliency using the ideas of dictionary learning and sparse coding. In contrast to existing methods that use local spatio-temporal feature detectors along with descriptors (such as HOG, HOG3D, and HOF), dictionary learning helps consider the saliency in a global setting (on the entire video) in a computationally efficient way. We consider only a small percentage of the most salient (least self-similar) regions found using our algorithm, over which spatio-temporal descriptors such as HOG and region covariance descriptors are computed. The ensemble of such block descriptors in a bag-of-features framework provides a holistic description of the motion sequence which can be used in a classification setting. Experiments on several benchmark datasets in video based action classification demonstrate that our approach performs competitively to the state of the art.
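
The core idea, reconstruction residual as a (negative) self-similarity score, can be sketched in a few lines of numpy. This is a toy stand-in, not the paper's method: plain least-squares coding against randomly sampled blocks replaces learned dictionaries with sparse coding, but it shows why a block that the rest of the video cannot reconstruct scores as salient.

```python
import numpy as np

def self_similarity_saliency(blocks, n_atoms=8, seed=0):
    """Score each feature block by how poorly it is reconstructed from a
    small set of *other* blocks (plain least squares stands in here for
    dictionary learning + sparse coding). A high residual means the block
    is least self-similar, i.e. most salient."""
    X = np.asarray(blocks, dtype=float)
    rng = np.random.default_rng(seed)
    scores = np.empty(len(X))
    for i, x in enumerate(X):
        others = np.delete(np.arange(len(X)), i)
        D = X[rng.choice(others, size=n_atoms, replace=False)]
        coef, *_ = np.linalg.lstsq(D.T, x, rcond=None)
        scores[i] = np.linalg.norm(x - D.T @ coef)
    return scores

# 50 mutually similar blocks plus one anomalous (salient) one
rng = np.random.default_rng(1)
blocks = rng.normal(0.0, 0.1, size=(50, 16))
blocks[7] += 5.0
scores = self_similarity_saliency(blocks)
print(int(np.argmax(scores)))
```

Keeping only the highest-scoring blocks is what reduces the descriptor computation to a small fraction of the video.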


International Conference on Intelligent Transportation Systems | 2009

Counting pedestrians and bicycles in traffic scenes

Guruprasad Somasundaram; Vassilios Morellas; Nikolaos Papanikolopoulos

Object detection and classification have received increased attention recently from computer vision and image processing researchers. Image processing views this problem at a much lower level than machine learning and linear algebraic analysis, which focus on the overall statistics of object classes given sufficient data. A good algorithm uses both approaches to its advantage. It is important to define and choose the features of an image suitably, so that the classification algorithm can perform at its best in distinguishing object classes. In this paper we investigate the performance of different types of texture-based features when used with a support vector machine. Their performance was evaluated and compared on standardized image datasets. The objective of this study was to arrive at a suitable algorithm to distinguish bicycles from pedestrians in locations such as bicycle paths and trails in order to estimate their traffic. The models developed during this study were applied in practice to traffic videos and the results are summarized here. In practice, other cues derived from motion were also used to improve the classification performance and hence the accuracy of the counts.
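
The shape of this pipeline can be sketched as follows, with the assumptions flagged: the two-number gradient statistic below is an invented stand-in for the texture features evaluated in the paper, and a nearest-centroid rule stands in for the support vector machine.

```python
import numpy as np

def texture_feature(patch):
    """Toy texture descriptor (a stand-in for the paper's features):
    mean and spread of the gradient magnitude over the patch."""
    gy, gx = np.gradient(patch.astype(float))
    mag = np.hypot(gx, gy)
    return np.array([mag.mean(), mag.std()])

class NearestCentroid:
    """Minimal classifier standing in for the SVM used in the paper."""
    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.centroids_ = np.array([X[y == c].mean(axis=0) for c in self.classes_])
        return self
    def predict(self, X):
        d = np.linalg.norm(X[:, None, :] - self.centroids_[None], axis=2)
        return self.classes_[np.argmin(d, axis=1)]

# Synthetic patches: low-texture ("pedestrian-like") vs striped ("spoke-like")
rng = np.random.default_rng(0)
smooth = [rng.normal(0, 0.05, (16, 16)) for _ in range(20)]
striped = []
for _ in range(20):
    p = np.zeros((16, 16))
    p[:, (np.arange(16) // 2) % 2 == 1] = 1.0
    striped.append(p + rng.normal(0, 0.05, (16, 16)))

X = np.array([texture_feature(p) for p in smooth + striped])
y = np.array([0] * 20 + [1] * 20)
clf = NearestCentroid().fit(X[::2], y[::2])      # train on every other patch
accuracy = float((clf.predict(X[1::2]) == y[1::2]).mean())
print(accuracy)
```

The point of the sketch is the division of labor the abstract describes: a low-level image-processing step produces the feature, and a statistical classifier separates the classes in feature space.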


International Conference on Robotics and Automation | 2008

Optimal camera placement with adaptation to dynamic scenes

Loren Fiore; Guruprasad Somasundaram; Andrew Drenner; Nikolaos Papanikolopoulos

The use of cameras is becoming more prevalent by the day owing to the variety of applications they serve. However, each application requires a specific placement of the cameras for best performance, and determining this placement has been the subject of much work in computer vision. Most current approaches, however, deal with a scene that does not change once the cameras have been placed. In this paper a system is developed and successfully tested that can not only distribute cameras around a scene to improve observability, but can also reorganize the cameras if the pattern of activity within the scene changes over time. The cameras can be distributed in either two- or three-dimensional space, and this was tested with mobile robotic cameras and stationary cameras.
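
One way to see why adaptation falls out of an observability objective is a greedy max-coverage sketch. This is a simple illustration, not the paper's optimization: cameras are circular fields of view over an activity-density grid, and repositioning for a changed scene is just re-running the placement.

```python
import numpy as np

def place_cameras(density, candidates, radius, k):
    """Greedy max-coverage placement: repeatedly pick the candidate
    position whose field of view covers the most not-yet-covered
    activity, then mark that activity as covered."""
    H, W = density.shape
    yy, xx = np.mgrid[0:H, 0:W]
    remaining = density.astype(float).copy()
    chosen = []
    for _ in range(k):
        best, best_gain = None, -1.0
        for (cy, cx) in candidates:
            mask = (yy - cy) ** 2 + (xx - cx) ** 2 <= radius ** 2
            gain = remaining[mask].sum()
            if gain > best_gain:
                best, best_gain = (cy, cx), gain
        chosen.append(best)
        cy, cx = best
        remaining[(yy - cy) ** 2 + (xx - cx) ** 2 <= radius ** 2] = 0.0
    return chosen

# Activity concentrated in two corners of a 20x20 scene; if the activity
# pattern shifts, re-running place_cameras redistributes the cameras.
density = np.zeros((20, 20))
density[2:5, 2:5] = 1.0
density[15:18, 15:18] = 1.0
candidates = [(y, x) for y in range(0, 20, 4) for x in range(0, 20, 4)]
chosen = place_cameras(density, candidates, radius=4, k=2)
print(chosen)
```

With mobile robotic cameras, the "re-run and move" step is what turns a static placement method into an adaptive one.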


International Conference on Intelligent Robots and Systems | 2013

Recognition of ballet micro-movements for use in choreography

Justin Dancs; Ravishankar Sivalingam; Guruprasad Somasundaram; Vassilios Morellas; Nikolaos Papanikolopoulos

Computer vision as a field has a wide and diverse range of applications. The specific application for this project was in the realm of dance, notably ballet and choreography. The project was a proof of concept for a choreography assistance tool used to recognize and record dance movements demonstrated by a choreographer. With the commercial arena in mind, the Microsoft Kinect was chosen as the imaging hardware, and a pilot set of movements was chosen to verify recognition feasibility. Before classification, all training and test data were transformed to a more suitable representation so that only the aspects needed to distinguish moves in the pilot set were passed to the classifier. In addition, several classification algorithms using the Nearest Neighbor (NN) and Support Vector Machine (SVM) methods were tested and compared, both from a single dictionary and across several different subjects. The results were promising given the framework of the project, and several expansions of this work are proposed.
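
The nearest-neighbor branch of the comparison reduces to a few lines once a representation is fixed. The "moves" below are invented one-joint sinusoidal trajectories rather than the paper's pilot set, and the representation is simply the flattened time series.

```python
import numpy as np

def classify_1nn(query, dictionary, labels):
    """Assign the label of the closest dictionary exemplar (1-NN)."""
    d = np.linalg.norm(dictionary - query, axis=1)
    return labels[int(np.argmin(d))]

# Toy 'moves': one joint angle sampled over time, two motion patterns
rng = np.random.default_rng(0)
t = np.linspace(0, 2 * np.pi, 50)

def sample(move):
    """Noisy instance of a move (hypothetical trajectories)."""
    base = np.sin(t) if move == 0 else np.sin(2 * t)
    return base + rng.normal(0, 0.1, t.size)

dictionary = np.array([sample(m) for m in (0, 0, 0, 1, 1, 1)])
labels = [0, 0, 0, 1, 1, 1]
pred = classify_1nn(sample(1), dictionary, labels)
print(pred)
```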


IEEE Transactions on Intelligent Transportation Systems | 2013

Classification and Counting of Composite Objects in Traffic Scenes Using Global and Local Image Analysis

Guruprasad Somasundaram; Ravishankar Sivalingam; Vassilios Morellas; Nikolaos Papanikolopoulos

Object recognition algorithms often focus on determining the class of a detected object in a scene. Two significant phases are usually involved in object recognition. The first is the object representation phase, in which the features that provide the best discriminative power under constraints such as lighting, resolution, scale, and view variations are chosen to describe the objects. The second is to use this representation space to develop models for each object class using discriminative classifiers. In this paper, we focus on composite objects, i.e., objects with two or more simpler classes that are interconnected in a complicated manner. One classic example of such a scenario is a bicyclist: a bicycle and a human who rides the bicycle. When faced with the task of classifying bicyclists and pedestrians, it is counterintuitive and often hard to come up with a discriminative classifier to distinguish the two classes. We explore global image analysis based on bags of visual words and compare the results with local image analysis, in which we attempt to distinguish the individual parts of the composite object. We also propose a unified naive Bayes framework and a combined histogram feature method for combining the individual classifiers for enhanced performance.
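
The naive Bayes combination step can be written down compactly: if the part observations are conditionally independent given the class, per-part posteriors multiply, with the prior divided back out once per extra part. The numbers below are made up for illustration only.

```python
import numpy as np

def naive_bayes_fuse(posteriors, prior):
    """Fuse per-part posteriors p(c|x_i) assuming the parts are
    conditionally independent given the class:
        p(c|x_1..x_n)  proportional to  prior(c)**(1-n) * prod_i p(c|x_i)
    """
    P = np.asarray(posteriors, dtype=float)   # shape (n_parts, n_classes)
    fused = np.prod(P, axis=0) * np.asarray(prior, dtype=float) ** (1 - P.shape[0])
    return fused / fused.sum()

# Hypothetical example: classes (bicyclist, pedestrian), uniform prior,
# a 'wheel' part classifier and a 'torso' part classifier
fused = naive_bayes_fuse([[0.8, 0.2], [0.6, 0.4]], prior=[0.5, 0.5])
print(fused)
```

With a uniform prior the prior term cancels after normalization; two weakly confident part classifiers still combine into a strong joint decision, which is the appeal of fusing part-level evidence for composite objects.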


International Conference on Distributed Smart Cameras | 2010

Dictionary learning based object detection and counting in traffic scenes

Ravishankar Sivalingam; Guruprasad Somasundaram; Vassilios Morellas; Nikolaos Papanikolopoulos; Osama A. Lotfallah; Youngchoon Park

The objective of object recognition algorithms in computer vision is to quantify the presence or absence of a certain class of objects, e.g., bicycles, cars, or people, which is highly useful in traffic estimation applications. Sparse signal models and dictionary learning techniques can be used not only to classify images as belonging to one class or another, but also, with the help of augmented dictionaries, to detect cases where two or more of these classes co-occur. We present results comparing the classification accuracy when different image classes occur together. Practical scenarios where such an approach applies include forms of intrusion detection, i.e., where an object of class B should not co-occur with objects of class A: for example, bicyclists riding on prohibited sidewalks, or a person trespassing in a hazardous area. Mixed-class detection of semantic content can be performed globally on downscaled versions of images or thumbnails. However, to accurately classify an image as belonging to one class or the other, we resort to higher-resolution images and localized content examination. With the help of blob tracking, this classification method can be used to count objects in traffic videos. The feature extraction method illustrated in this paper is well suited to images obtained in practical settings, which are usually of poor quality and lack enough texture for the popular gradient-based methods to produce adequate feature points. We demonstrate that by training different types of dictionaries appropriately, we can perform the various tasks required for traffic monitoring.


International Conference on Robotics and Automation | 2012

Sparse representation of point trajectories for action classification

Ravishankar Sivalingam; Guruprasad Somasundaram; Vineet Bhatawadekar; Vassilios Morellas; Nikolaos Papanikolopoulos

Action classification is an important component of human-computer interaction. Trajectory classification is an effective way of performing action recognition with significant success reported in the literature. We compare two different representation schemes, raw multivariate time-series data and the covariance descriptors of the trajectories, and apply sparse representation techniques for classifying the various actions. The features are sparse coded using the Orthogonal Matching Pursuit algorithm, and the gestures and actions are classified based on the reconstruction residuals. We demonstrate the performance of our approach on standardized datasets such as the Australian Sign Language (AusLan) and UCF Motion Capture datasets, collected using high-quality motion capture systems, as well as motion capture data obtained from a Microsoft Kinect sensor.
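
The residual-based classification step the abstract describes can be sketched with a bare-bones Orthogonal Matching Pursuit. The class dictionaries below are random orthonormal bases rather than ones built from the AusLan or Kinect data.

```python
import numpy as np

def omp(D, x, n_nonzero):
    """Orthogonal Matching Pursuit: greedily add the atom (column of D)
    most correlated with the residual, refit by least squares, repeat."""
    residual, support = x.astype(float).copy(), []
    for _ in range(n_nonzero):
        support.append(int(np.argmax(np.abs(D.T @ residual))))
        coef, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)
        residual = x - D[:, support] @ coef
    return coef, residual

def classify_by_residual(class_dicts, x, n_nonzero=2):
    """Sparse-code x against each class dictionary and pick the class
    whose atoms reconstruct it with the smallest residual."""
    errs = [np.linalg.norm(omp(D, x, n_nonzero)[1]) for D in class_dicts]
    return int(np.argmin(errs))

# Two synthetic class dictionaries: random orthonormal atoms (columns)
rng = np.random.default_rng(0)
D0, _ = np.linalg.qr(rng.normal(size=(10, 4)))
D1, _ = np.linalg.qr(rng.normal(size=(10, 4)))
x = D0 @ np.array([1.0, -0.5, 0.0, 0.0]) + rng.normal(0, 0.01, 10)
pred = classify_by_residual([D0, D1], x)
print(pred)
```

A sample composed of class-0 atoms leaves almost no residual under the class-0 dictionary but a large one under class 1, which is exactly the signal the reconstruction-residual rule exploits.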


Mediterranean Conference on Control and Automation | 2012

Object classification in traffic scenes using multiple spatio-temporal features

Guruprasad Somasundaram; Vassilios Morellas; Nikolaos Papanikolopoulos; Saad J. Bedros

Object classification is a widely researched area in computer vision, and lately there has been much attention to appearance-based models for representing objects. The most important aspect of classifying objects such as pedestrians and vehicles in traffic scenes is that motion information is available. This information presents itself as temporal cues such as velocity and as spatio-temporal cues such as optical flow, DHOG [6], etc. We propose a novel spatio-temporal feature based on covariance descriptors, known as DCOV, which captures information complementary to the DHOG feature. We present a combined classifier based on properties of tracked objects along with the DHOG and DCOV features. Based on experiments on the PETS 2001 dataset and two video sequences of bicycle and pedestrian traffic, we show that the proposed classifier provides competitive performance in distinguishing pedestrians, vehicles, and bicyclists. Our method is also adaptive and benefits from the availability of more training data. The classifier is also developed with real-time feasibility in mind.
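
Covariance descriptors of the kind that underlie DCOV can be sketched as follows. The feature vectors and the log-Euclidean comparison are generic illustrations of how such descriptors are built and compared, not the paper's exact construction.

```python
import numpy as np

def covariance_descriptor(features, eps=1e-6):
    """Covariance matrix of per-sample feature vectors (rows), with a
    small ridge so the result is strictly positive definite."""
    C = np.cov(np.asarray(features, dtype=float), rowvar=False)
    return C + eps * np.eye(C.shape[0])

def log_euclidean_distance(C1, C2):
    """Compare SPD matrices by the Frobenius distance between their
    matrix logarithms (computed via eigendecomposition)."""
    def spd_log(C):
        w, V = np.linalg.eigh(C)
        return (V * np.log(w)) @ V.T
    return float(np.linalg.norm(spd_log(C1) - spd_log(C2)))

# Two samples of one 'motion style' and one sample of another:
# descriptors of the same style should be closer than across styles.
rng = np.random.default_rng(0)
A1 = covariance_descriptor(rng.normal(0, 0.1, size=(200, 3)))
A2 = covariance_descriptor(rng.normal(0, 0.1, size=(200, 3)))
B = covariance_descriptor(rng.normal(0, 2.0, size=(200, 3)))
print(log_euclidean_distance(A1, A2) < log_euclidean_distance(A1, B))
```

Because covariance matrices live on the manifold of SPD matrices, a matrix-logarithm distance rather than a plain Euclidean one is the standard way to compare them.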


International Conference on Image Processing | 2012

Object classification with efficient global self-similarity descriptors based on sparse representations

Guruprasad Somasundaram; Vassilios Morellas; Nikolaos Papanikolopoulos

Object recognition entails extracting information about which object class(es) are present in an image. In order to enhance the performance of object recognition, reducing the redundancy in the data is essential. Prior literature [1, 2] introduced local and global self-similarity features to highlight the areas in an image which are useful for object classification and detection. We introduce an efficient self-similarity measure based on sparse representations and propose two different descriptors. Our measure of self-similarity is determined across multiple scales and is more efficient than prior work. We test our self-similarity descriptors using support vector machine based classification on the PASCAL VOC 2007 database consisting of 20 object classes. Comparative results indicate performance competitive with the prior approaches to computing self-similarity descriptors.

Collaboration


Top co-authors of Guruprasad Somasundaram:

Evan Ribnick (University of Minnesota)
Loren Fiore (University of Minnesota)
Duc Fehr (University of Minnesota)