Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Venkatesh K. Subramanian is active.

Publication


Featured research published by Venkatesh K. Subramanian.


Pattern Recognition | 2014

Emotion recognition from geometric facial features using self-organizing map

Anima Majumder; Laxmidhar Behera; Venkatesh K. Subramanian

This paper presents a novel emotion recognition model using the system identification approach. A comprehensive data-driven model using an extended Kohonen self-organizing map (KSOM) has been developed whose input is a 26-dimensional facial geometric feature vector comprising eye, lip and eyebrow feature points. The analytical face model using this 26-dimensional geometric feature vector has been effectively used to describe the facial changes due to different expressions. This paper thus includes an automated generation scheme for this geometric facial feature vector. The proposed non-heuristic model has been developed using training data from the MMI facial expression database. The emotion recognition accuracy of the proposed scheme has been compared with radial basis function network, multi-layered perceptron and support vector machine based recognition schemes. The experimental results show that the proposed model is very efficient in recognizing the six basic emotions, with a significant increase in average classification accuracy over the radial basis function network and the multi-layered perceptron. The average recognition rate of the proposed method is also comparatively better than that of the multi-class support vector machine.

Highlights:
- We propose an emotion recognition model using system identification.
- A 26-dimensional geometric feature vector is extracted using three different algorithms.
- Classification uses an intermediate Kohonen self-organizing map layer.
- A comparative study with radial basis function, multi-layer perceptron and support vector machine classifiers.
- Efficient recognition results with a significant increase in average recognition accuracy over radial basis function and multi-layer perceptron, and a marginal improvement over support vector machine.
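The KSOM classifier at the heart of this model can be sketched in a few lines. The grid size, learning-rate schedule and toy 26-dimensional data below are illustrative assumptions, not the paper's actual configuration:

```python
import numpy as np

# Minimal Kohonen self-organizing map: a grid of weight vectors is pulled
# toward each training sample, with a neighbourhood that shrinks over time.
class KSOM:
    def __init__(self, grid=(6, 6), dim=26, seed=0):
        rng = np.random.default_rng(seed)
        self.w = rng.normal(size=(grid[0] * grid[1], dim))
        self.coords = np.array([(i, j) for i in range(grid[0])
                                for j in range(grid[1])], dtype=float)

    def bmu(self, x):
        # Best-matching unit: the node whose weight vector is closest to x.
        return int(np.argmin(np.linalg.norm(self.w - x, axis=1)))

    def quantization_error(self, X):
        # Mean distance from each sample to its BMU's weight vector.
        return float(np.mean([np.linalg.norm(self.w[self.bmu(x)] - x)
                              for x in X]))

    def train(self, X, epochs=20, lr0=0.5, sigma0=2.0):
        for t in range(epochs):
            lr = lr0 * (1 - t / epochs)
            sigma = sigma0 * (1 - t / epochs) + 0.5
            for x in X:
                b = self.bmu(x)
                d = np.linalg.norm(self.coords - self.coords[b], axis=1)
                h = np.exp(-d ** 2 / (2 * sigma ** 2))[:, None]
                self.w += lr * h * (x - self.w)

# Toy data: two well-separated clusters standing in for expression classes.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.1, (30, 26)),
               rng.normal(5.0, 0.1, (30, 26))])
som = KSOM()
qe_before = som.quantization_error(X)
som.train(X)
qe_after = som.quantization_error(X)   # drops as the map fits the data
```

In the paper's setting the 26-dimensional inputs would be the geometric feature vectors, with emotion labels assigned to map nodes after training.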


National Conference on Communications | 2014

Localized image enhancement

Saumik Bhattacharya; Sumana Gupta; Venkatesh K. Subramanian

Image enhancement is a well-established field in image processing. The main objective of image enhancement is to increase the perceptual information contained in an image for better representation, using intermediate steps such as contrast enhancement, deblurring and denoising. Among these, contrast enhancement is especially important, as human eyes are more sensitive to luminance than to the chrominance components of an image. Most contrast enhancement algorithms proposed to date are global methods. The major drawback of this global approach is that, in practical scenarios, the contrast of an image does not deteriorate uniformly, and the outputs of the enhancement techniques saturate at properly contrasted points, which leads to information loss. In fact, to the best of our knowledge, no non-reference perceptual measure of image quality has yet been proposed to measure localized enhancement. We propose a fast algorithm to increase the contrast of an image locally using a singular value decomposition (SVD) approach, and attempt to define parameters that give clues about the progress of the enhancement process.
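A classical SVD-based contrast technique rescales a patch's singular values so the dominant one matches that of a maximally spread reference; applying it block by block makes the stretch local. The sketch below illustrates that idea only and is not the authors' algorithm; the 8x8 block size and the linspace reference are arbitrary choices:

```python
import numpy as np

def svd_contrast_boost(patch):
    """Scale a [0,1] grayscale patch's singular values so its dominant
    singular value matches that of a maximally spread reference patch."""
    u, s, vt = np.linalg.svd(patch, full_matrices=False)
    # Hypothetical reference: same-shape patch spanning the full intensity range.
    ref = np.linspace(0.0, 1.0, patch.size).reshape(patch.shape)
    xi = np.linalg.svd(ref, compute_uv=False)[0] / s[0]
    out = (u * (xi * s)) @ vt          # reconstruct with scaled spectrum
    return np.clip(out, 0.0, 1.0)

def enhance_locally(img, block=8):
    """Apply the boost block by block, so low-contrast regions are
    stretched independently of already well-exposed regions."""
    out = img.copy()
    h, w = img.shape
    for i in range(0, h, block):
        for j in range(0, w, block):
            out[i:i + block, j:j + block] = svd_contrast_boost(
                img[i:i + block, j:j + block])
    return out

rng = np.random.default_rng(0)
img = 0.45 + 0.05 * rng.random((32, 32))   # flat, low-contrast input
out = enhance_locally(img)
```

Because each block gets its own scale factor, a uniformly dim region is stretched even when other parts of the image are already well exposed, which is exactly the failure mode of global methods described above.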


International Conference on Computer Modelling and Simulation | 2011

A Novel Approach of Human Motion Tracking with the Mobile Robotic Platform

Meenakshi Gupta; Laxmidhar Behera; Venkatesh K. Subramanian

This paper presents a novel and robust approach to detect and follow a human with a mobile robotic platform. In order to follow a human, both the initial detection of the human and the subsequent tracking need to be implemented. As the robot is initially static, initial human detection is done using a background subtraction technique. To remove outlier objects, filters are formulated based on the aspect ratio and horizontal projection histogram of the human. Human detection in subsequent frames is done by back-projecting the color histograms of the human torso and legs. To make the human detection robust, a shape analysis algorithm is developed to find the “two legs apart pattern” in the vertical projection histogram (VPH) of the detected foreground. For tracking, linear motion controllers are proposed; these require only visual information to generate motion commands for the robot. The novelty of our approach includes (1) human tracking using visual information alone, (2) use of simple linear motion controllers to generate the translational and rotational velocities for the robot, and (3) cost effectiveness, as the experimental setup requires only one vision sensor. The current version of our system runs on a Pioneer P3-DX mobile robot and can follow a human at up to 0.7 m/s in an indoor environment.
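The “two legs apart pattern” check can be illustrated on a binary silhouette mask: the vertical projection histogram of the lower body should show two peaks separated by a valley (the gap between the legs). The half-height split and the 0.5 valley ratio below are illustrative thresholds, not values from the paper:

```python
import numpy as np

def vertical_projection_histogram(mask):
    # Column-wise count of foreground pixels in a binary silhouette mask.
    return mask.sum(axis=0)

def has_two_leg_pattern(mask, valley_ratio=0.5):
    """Heuristic sketch: in the lower half of a walking silhouette, the VPH
    should dip well below its peaks between the two legs."""
    lower = mask[mask.shape[0] // 2:]            # legs live in the lower half
    vph = vertical_projection_histogram(lower)
    cols = np.flatnonzero(vph > 0)
    if cols.size == 0:
        return False
    vph = vph[cols[0]:cols[-1] + 1]              # trim empty margins
    peak = vph.max()
    mid = vph[len(vph) // 3: 2 * len(vph) // 3]  # central third of the body
    # Two legs apart: the central gap dips well below the leg peaks.
    return mid.size > 0 and mid.min() < valley_ratio * peak

# Toy silhouette: a torso block on top, two separated leg blocks below.
sil = np.zeros((40, 20), dtype=int)
sil[5:20, 5:15] = 1          # torso
sil[20:38, 5:8] = 1          # left leg
sil[20:38, 12:15] = 1        # right leg
```

A silhouette with legs together produces a flat VPH in the lower half, so the same test rejects it, which is what lets the tracker distinguish a standing/walking human from a similarly sized solid object.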


Advanced Video and Signal Based Surveillance | 2011

Formulation, detection and application of occlusion states (Oc-7) in the context of multiple object tracking

Prithwijit Guha; Amitabha Mukerjee; Venkatesh K. Subramanian

Occlusion is often thought of as a challenge for visual algorithms, especially tracking. Existing literature, however, has identified a number of occlusion categories in the context of tracking in an ad hoc manner. We propose a systematic approach that formulates a set of occlusion cases by considering the spatial relations among object supports (projections on the image plane) and the detected foreground blobs, and show that only 7 occlusion states are possible. We designate the resulting qualitative formalism as Oc-7, and show how these occlusion states can be detected and used effectively for the task of multi-object tracking under occlusions of various types. The object support is decomposed into overlapping patches which are tracked independently when occlusions occur. As a demonstration of the application of these occlusion states, we propose a reasoning scheme for selective tracker execution and object feature updates to track multiple objects in complex environments.
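The flavour of such a state formulation can be sketched with bounding boxes: each tracked object's support is related to the detected foreground blobs by overlap. The four labels below are illustrative only and do not reproduce the paper's actual Oc-7 taxonomy:

```python
def overlaps(a, b):
    # Axis-aligned boxes as (x0, y0, x1, y1); True if interiors intersect.
    return not (a[2] <= b[0] or b[2] <= a[0] or a[3] <= b[1] or b[3] <= a[1])

def relate(objects, blobs):
    """Illustrative (not the paper's Oc-7 labels) classification of each
    tracked object by how its support overlaps the foreground blobs."""
    states = {}
    for name, box in objects.items():
        hits = [i for i, blob in enumerate(blobs) if overlaps(box, blob)]
        shared = any(overlaps(other, blobs[i])
                     for i in hits
                     for oname, other in objects.items() if oname != name)
        if not hits:
            states[name] = "disappeared"      # no supporting foreground
        elif shared:
            states[name] = "object-object"    # blob shared with another object
        elif len(hits) > 1:
            states[name] = "fragmented"       # support split across blobs
        else:
            states[name] = "isolated"         # clean one-to-one association
    return states

# Hypothetical scene: A and B merge into one blob, C splits in two,
# D has no foreground support, E is cleanly isolated.
objs = {"A": (0, 0, 10, 10), "B": (8, 0, 18, 10),
        "C": (30, 30, 40, 40), "D": (50, 50, 60, 60),
        "E": (70, 70, 80, 80)}
blobs = [(0, 0, 18, 10), (30, 30, 34, 40), (36, 30, 40, 40),
         (70, 70, 80, 80)]
states = relate(objs, blobs)
```

In the paper's scheme, such per-object states are what drive the selective tracker execution and feature-update decisions mentioned above.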


IEEE Systems Journal | 2015

A Robust Visual Human Detection Approach With UKF-Based Motion Tracking for a Mobile Robot

Meenakshi Gupta; Laxmidhar Behera; Venkatesh K. Subramanian; Mo Jamshidi

Robust tracking of a human in a video sequence is an essential prerequisite to a growing number of applications where a robot needs to interact with a human user or operate in a human-inhabited environment. This paper presents a robust approach that enables a mobile robot to detect and track a human using an onboard RGB-D sensor. Such robots could be used for security, surveillance and assistive robotics applications. The proposed approach achieves real-time computation through a unique combination of new ideas and well-established techniques. In the proposed method, background subtraction is combined with a depth-segmentation detector and a template-matching method to initialize the human tracking automatically. A novel concept of head and hand creation based on depth of interest is introduced to track the human silhouette in a dynamic environment, when the robot is moving. To make the algorithm robust, a series of detectors (e.g., height, size and shape) is utilized to distinguish the target human from other objects. Because of the relatively high computation time of the silhouette-matching-based method, a confidence level is defined that restricts the matching-based method to where it is imperative. An unscented Kalman filter is used to predict the human location in the image frame to maintain the continuity of the robot's motion. The efficacy of the approach is demonstrated through a real experiment on a mobile robot navigating in an indoor environment.
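The unscented Kalman filter's predict step propagates sigma points through the motion model instead of linearizing it. Below is a generic sketch with a constant-velocity image-plane model; the state layout and noise values are assumptions for illustration, not the paper's configuration:

```python
import numpy as np

def sigma_points(x, P, alpha=0.1, beta=2.0, kappa=0.0):
    """Standard unscented-transform sigma points (Wan / van der Merwe)."""
    n = len(x)
    lam = alpha ** 2 * (n + kappa) - n
    S = np.linalg.cholesky((n + lam) * P)          # matrix square root
    pts = np.vstack([x, x + S.T, x - S.T])         # 2n + 1 points
    wm = np.full(2 * n + 1, 1.0 / (2 * (n + lam)))
    wc = wm.copy()
    wm[0] = lam / (n + lam)
    wc[0] = wm[0] + (1 - alpha ** 2 + beta)
    return pts, wm, wc

def ukf_predict(x, P, f, Q):
    """Propagate mean and covariance through a (possibly nonlinear) motion
    model f; with a linear f this reduces to the ordinary Kalman predict."""
    pts, wm, wc = sigma_points(x, P)
    fp = np.array([f(p) for p in pts])
    x_pred = wm @ fp
    d = fp - x_pred
    P_pred = (wc[:, None] * d).T @ d + Q
    return x_pred, P_pred

# Constant-velocity model for the target's image position: (u, v, du, dv).
dt = 1.0
F = np.block([[np.eye(2), dt * np.eye(2)],
              [np.zeros((2, 2)), np.eye(2)]])
f = lambda s: F @ s
x = np.array([100.0, 50.0, 2.0, -1.0])
P = np.eye(4) * 4.0
Q = np.eye(4) * 0.1
x_pred, P_pred = ukf_predict(x, P, f, Q)
```

The predicted image location lets the robot keep moving smoothly through frames where the slower silhouette matcher is skipped, which is the role the filter plays in the approach above.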


IEEE Transactions on Systems, Man, and Cybernetics | 2018

Automatic Facial Expression Recognition System Using Deep Network-Based Data Fusion

Anima Majumder; Laxmidhar Behera; Venkatesh K. Subramanian

This paper presents a novel automatic facial expression recognition system (AFERS) using the deep network framework. The proposed AFERS consists of four steps: 1) geometric feature extraction; 2) regional local binary pattern (LBP) feature extraction; 3) fusion of both features using autoencoders; and 4) classification using a Kohonen self-organizing map (SOM)-based classifier. This paper makes three distinct contributions. The proposed deep network, consisting of autoencoders and the SOM-based classifier, is computationally more efficient and more accurate. The fusion of geometric features with LBP features using autoencoders provides a better representation of facial expression. The SOM-based classifier proposed in this paper has been improved by making use of soft-threshold logic and a better learning algorithm. The performance of the proposed approach is validated on two widely used databases (DBs): 1) MMI and 2) extended Cohn–Kanade (CK+). Average recognition accuracies of 97.55% on the MMI DB and 98.95% on the CK+ DB are obtained using the proposed algorithm. The recognition results obtained from the fused features are found to be distinctly superior both to recognition using the individual features and to recognition with a direct concatenation of the individual feature vectors. Simulation results validate that the proposed AFERS is more efficient than existing approaches.
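The fusion idea, concatenating the two feature vectors and compressing them with an autoencoder into a joint code, can be sketched as follows. The single tanh hidden layer, its size, and the plain gradient-descent training are illustrative simplifications, not the paper's deep network:

```python
import numpy as np

def train_fusion_autoencoder(G, A, hidden=16, epochs=500, lr=0.1, seed=0):
    """Toy fusion: concatenate geometric (G) and appearance (A) features,
    then learn a compact joint code with a one-hidden-layer autoencoder
    (tanh encoder, linear decoder, squared-error loss)."""
    X = np.hstack([G, A])
    X = (X - X.mean(0)) / (X.std(0) + 1e-8)   # standardize both feature sets
    rng = np.random.default_rng(seed)
    n, d = X.shape
    W1 = rng.normal(0, 0.1, (d, hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0, 0.1, (hidden, d)); b2 = np.zeros(d)
    for _ in range(epochs):
        H = np.tanh(X @ W1 + b1)              # fused code
        R = H @ W2 + b2                       # reconstruction
        E = R - X
        gW2 = H.T @ E / n; gb2 = E.mean(0)    # backprop through decoder
        dH = (E @ W2.T) * (1 - H ** 2)        # ...and through the tanh
        gW1 = X.T @ dH / n; gb1 = dH.mean(0)
        W1 -= lr * gW1; b1 -= lr * gb1
        W2 -= lr * gW2; b2 -= lr * gb2
    loss = float(np.mean(E ** 2))
    return np.tanh(X @ W1 + b1), loss

# Hypothetical feature dimensions; the real system would use geometric
# and regional-LBP vectors extracted from face images.
rng = np.random.default_rng(2)
code, loss = train_fusion_autoencoder(rng.normal(size=(50, 10)),
                                      rng.normal(size=(50, 20)))
```

The learned code, rather than the raw concatenation, is what would be handed to the SOM-based classifier, which is why the paper reports fused features beating direct concatenation.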


Systems, Man, and Cybernetics | 2017

A Novel Vision-Based Tracking Algorithm for a Human-Following Mobile Robot

Meenakshi Gupta; Swagat Kumar; Laxmidhar Behera; Venkatesh K. Subramanian

The ability to follow a human is an important requirement for a service robot designed to work alongside humans in homes or workplaces. This paper describes the development and implementation of a novel robust visual controller for a human-following robot. This visual controller consists of two parts: 1) a robust algorithm that tracks a human visible in its camera view and 2) a servo controller that generates the necessary motion commands so that the robot can follow the target human. The tracking algorithm uses point-based features, like speeded-up robust features (SURF), to detect the human under challenging conditions such as variation in illumination, pose change, full or partial occlusion, and abrupt camera motion. The novel contributions in the tracking algorithm include the following: 1) a dynamic object model that evolves over time to deal with short-term changes while maintaining stability over the long run; 2) an online K-D tree-based classifier, used along with a Kalman filter, to differentiate a case of pose change from a case of partial or full occlusion; and 3) a method to detect pose change due to out-of-plane rotations, a difficult problem that leads to frequent tracking failures in a human-following robot. An improved version of a visual servo controller is proposed that uses feedback linearization to overcome the chattering phenomenon present in the sliding-mode-based controllers used previously. The efficacy of the proposed approach is demonstrated through various simulations and real-life experiments with an actual mobile robot platform.


IEEE Transactions on Circuits and Systems for Video Technology | 2017

Perceptual Video Summarization—A New Framework for Video Summarization

Sinnu Susan Thomas; Sumana Gupta; Venkatesh K. Subramanian

The enormous growth of video content in recent times has raised the need to abbreviate the content for human consumption. Thus, there is a need for summaries of a quality that meets the requirements of human users. This also means that the summarization must incorporate the peculiar features of human perception. We present a new framework for video summarization in this paper. Unlike many available summarization algorithms that utilize only statistical redundancy, we introduce for the first time the features of the human visual system within the summarization framework itself, to allow for the emphasis of perceptually significant events while simultaneously eliminating perceptual redundancy from the summaries. The framework has been evaluated using both subjective and objective evaluation scores.


Journal of Visual Communication and Image Representation | 2016

Perceptual synoptic view of pixel, object and semantic based attributes of video

Sinnu Susan Thomas; Sumana Gupta; Venkatesh K. Subramanian

Highlights:
- The need for various object- and semantic-level attributes is described in detail.
- An attention model based on object- and semantic-level attributes is proposed.
- Key frames are selected based on the proposed model.
- Key frames are fused to give a perceptual synopsis.

For a scene, what are the object- and semantic-based attributes, beyond the pixel-based attributes, and how do they affect our attentional selection? These are some of the questions we need to address. We studied the effects of various attributes on our attentional perspective. We describe a new saliency prediction model that accounts for pixel-level attributes such as color, contrast and intensity; object-level attributes such as the size and shape of objects; and semantic-level attributes such as the motion and speed of objects. We quantified these attributes based on motion contrast, motion energy and motion chromism. With this in view, we examined the problem of information prioritizing and filtering, with emphasis on directing this exercise using the object- and semantic-based attributes of the human attention model. We have evaluated the proposed approach on different types of videos for quantitative and qualitative comparison. The promising results create a gateway for synoptic views.


International Symposium on Neural Networks | 2014

Local binary pattern based facial expression recognition using Self-organizing Map

Anima Majumder; Laxmidhar Behera; Venkatesh K. Subramanian

This paper presents an appearance-feature-based facial expression recognition system using a Kohonen Self-Organizing Map (KSOM). Appearance features are extracted using uniform local binary patterns (LBPs) from equally sub-divided blocks applied over the face image. The dimensionality of the LBP feature vector is further reduced using principal component analysis (PCA) to remove redundant data that would lead to unnecessary computation cost. Using our proposed KSOM-based classification approach, we train only the 59-dimensional LBP features extracted from the whole facial region. The classifier is designed to categorize six basic facial expressions (happiness, sadness, disgust, anger, surprise and fear). To validate the performance of the reduced 59-dimensional LBP feature vector, we also train the KSOM on the original data of dimension 944. The results demonstrate that, with only marginal degradation in overall recognition performance, the reduced 59-dimensional data obtains very good classification results. The paper also presents three comparative studies based on widely used classifiers: support vector machine (SVM), radial basis function network (RBFN) and multi-layer perceptron (MLP3). Our KSOM-based approach outperforms all the other classification methods, with an average recognition accuracy of 69.18%, whereas the average recognition rates obtained by SVM, RBFN and MLP3 are 65.78%, 68.09% and 62.73%, respectively.
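The 59-dimensional feature comes from the standard uniform-LBP construction: with 8 neighbours there are 58 "uniform" binary patterns (at most two 0/1 transitions around the circle), plus one shared bin for everything else. A minimal numpy version, using a square 3x3 neighbourhood rather than a true circular interpolated one:

```python
import numpy as np

def uniform_lbp_histogram(gray):
    """59-bin uniform LBP histogram of a grayscale block (8 neighbours,
    radius 1, square approximation of the circular neighbourhood)."""
    # 8-neighbour offsets, listed in circular order around the centre pixel.
    offs = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
            (1, 1), (1, 0), (1, -1), (0, -1)]
    c = gray[1:-1, 1:-1]
    code = np.zeros_like(c, dtype=np.int32)
    for bit, (di, dj) in enumerate(offs):
        n = gray[1 + di: gray.shape[0] - 1 + di,
                 1 + dj: gray.shape[1] - 1 + dj]
        code |= (n >= c).astype(np.int32) << bit   # threshold at the centre

    # Map each of the 256 codes to one of 59 bins: 58 uniform patterns
    # (<= 2 transitions in the circular code) plus one non-uniform bin.
    def transitions(p):
        bits = [(p >> k) & 1 for k in range(8)]
        return sum(bits[k] != bits[(k + 1) % 8] for k in range(8))

    lut = np.full(256, -1)
    nxt = 0
    for p in range(256):
        if transitions(p) <= 2:
            lut[p] = nxt
            nxt += 1
    lut[lut == -1] = nxt            # shared bin for all non-uniform codes
    return np.bincount(lut[code].ravel(), minlength=59)

rng = np.random.default_rng(0)
g = rng.integers(0, 256, size=(18, 18)).astype(np.int32)
hist = uniform_lbp_histogram(g)
```

Computing one such histogram per face (or concatenating per-block histograms, as the 944-dimensional variant does) yields the feature vectors fed to the KSOM.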

Collaboration


Dive into Venkatesh K. Subramanian's collaborations.

Top Co-Authors

Laxmidhar Behera
Indian Institute of Technology Kanpur

Anima Majumder
Indian Institute of Technology Kanpur

Sumana Gupta
Indian Institute of Technology Kanpur

Sinnu Susan Thomas
Indian Institute of Technology Kanpur

Meenakshi Gupta
Indian Institute of Technology Kanpur

Amit Mitra
Indian Institute of Technology Kanpur

Disha Prakash
Indian Institute of Technology Kanpur

Rajesh Bhatt
Indian Institute of Technology Kanpur

A. Srinivas
Defence Metallurgical Research Laboratory