Publication


Featured research published by Ramanathan Subramanian.


International Conference on Computer Vision | 2013

No Matter Where You Are: Flexible Graph-Guided Multi-task Learning for Multi-view Head Pose Classification under Target Motion

Yan Yan; Elisa Ricci; Ramanathan Subramanian; Oswald Lanz; Nicu Sebe

We propose a novel Multi-Task Learning framework (FEGA-MTL) for classifying the head pose of a person who moves freely in an environment monitored by multiple, large field-of-view surveillance cameras. As the target (person) moves, distortions in facial appearance owing to camera perspective and scale severely impede the performance of traditional head pose classification methods. FEGA-MTL operates on a dense uniform spatial grid and learns appearance relationships across partitions, as well as partition-specific appearance variations for a given head pose, to build region-specific classifiers. Guided by two graphs which a priori model appearance similarity among (i) grid partitions based on camera geometry and (ii) head pose classes, the learner efficiently clusters appearance-wise related grid partitions to derive the optimal partitioning. For pose classification, upon determining the target's position using a person tracker, the appropriate region-specific classifier is invoked. Experiments confirm that FEGA-MTL achieves state-of-the-art classification with limited training data.


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2016

A Multi-Task Learning Framework for Head Pose Estimation under Target Motion

Yan Yan; Elisa Ricci; Ramanathan Subramanian; Gaowen Liu; Oswald Lanz; Nicu Sebe

Recently, head pose estimation (HPE) from low-resolution surveillance data has gained in importance. However, monocular and multi-view HPE approaches still work poorly under target motion, as facial appearance distorts owing to camera perspective and scale changes when a person moves around. To this end, we propose FEGA-MTL, a novel framework based on Multi-Task Learning (MTL) for classifying the head pose of a person who moves freely in an environment monitored by multiple, large field-of-view surveillance cameras. Upon partitioning the monitored scene into a dense uniform spatial grid, FEGA-MTL simultaneously clusters grid partitions into regions with similar facial appearance, while learning region-specific head pose classifiers. In the learning phase, guided by two graphs which a priori model the similarity among (1) grid partitions based on camera geometry and (2) head pose classes, FEGA-MTL derives the optimal scene partitioning and the associated pose classifiers. Upon determining the target's position using a person tracker at test time, the corresponding region-specific classifier is invoked for HPE. The FEGA-MTL framework naturally extends to a weakly supervised setting where the target's walking direction is employed as a proxy in lieu of head orientation. Experiments confirm that FEGA-MTL significantly outperforms competing single-task and multi-task learning methods in multi-view settings.
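As a rough illustration of the graph-guided multi-task idea described above, the sketch below couples per-grid-cell linear head-pose classifiers through a graph Laplacian built from cell adjacency. It is a minimal sketch under assumed data shapes and hyperparameters, not the FEGA-MTL implementation (which additionally uses a second graph over pose classes and learns the scene partitioning itself).

# Minimal illustrative sketch of graph-guided multi-task learning in the
# spirit of FEGA-MTL (not the authors' implementation). Tasks = grid cells;
# each cell learns a linear head-pose scorer, and a graph Laplacian over
# cells (built a priori, e.g. from camera geometry) pulls the weights of
# neighbouring cells together. All names, shapes and values are assumptions.
import numpy as np

def laplacian(adj):
    """Unnormalised graph Laplacian L = D - A for a symmetric adjacency matrix."""
    return np.diag(adj.sum(axis=1)) - adj

def fit_region_classifiers(X, y, cell, adj, n_cells,
                           lam=1.0, gamma=1.0, lr=1e-2, iters=500):
    """
    X: (n, d) facial-appearance features, y: (n,) pose labels in {0..K-1},
    cell: (n,) grid-cell index of the target when each sample was captured,
    adj: (n_cells, n_cells) cell adjacency from camera geometry.
    Returns W of shape (n_cells, K, d): one linear scorer per cell and pose class.
    """
    n, d = X.shape
    K = y.max() + 1
    L = laplacian(adj)
    W = np.zeros((n_cells, K, d))
    Y = np.eye(K)[y]                      # one-hot pose targets
    for _ in range(iters):
        grad = lam * W                    # ridge penalty on each cell's weights
        # graph coupling: sum_c' L[c, c'] * W[c'] ties neighbouring cells together
        grad += gamma * np.tensordot(L, W, axes=(1, 0))
        for c in range(n_cells):          # squared-loss data-fit term, per cell
            m = cell == c
            if m.any():
                err = X[m] @ W[c].T - Y[m]            # (n_c, K) residuals
                grad[c] += err.T @ X[m] / m.sum()
        W -= lr * grad
    return W

def predict_pose(W, x, c):
    """Pick the pose class with the highest score from cell c's classifier."""
    return int(np.argmax(W[c] @ x))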


Quality of Multimedia Experience | 2013

Overview of Eye Tracking Datasets

Stefan Winkler; Ramanathan Subramanian

Datasets of images or videos annotated with eye tracking data constitute important ground truth for studies on saliency models, which have applications in quality assessment and other areas. Over two dozen such databases are now available in the public domain; this paper presents an overview of them.


IEEE Transactions on Affective Computing | 2012

Connecting Meeting Behavior with Extraversion—A Systematic Study

Bruno Lepri; Ramanathan Subramanian; Kyriaki Kalimeri; Jacopo Staiano; Fabio Pianesi; Nicu Sebe

This work investigates the suitability of medium-grained meeting behaviors, namely speaking time and social attention, for automatic classification of the Extraversion personality trait. Experimental results confirm that these behaviors are indeed effective for the automatic detection of Extraversion. The main findings of our study are that: 1) speaking time and (some forms of) social gaze are effective indicators of Extraversion, 2) classification accuracy is affected by the amount of time for which meeting behavior is observed, 3) independently considering only the attention received by the target from peers is insufficient, and 4) the distribution of the peers' social attention plays a crucial role.
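For illustration only, the sketch below shows how per-window behavioral features of the kind studied here could be fed to an off-the-shelf classifier to predict a binary Extraversion label. The feature names, window scheme and choice of SVM are assumptions, not the authors' exact pipeline.

# Illustrative sketch: classifying high vs. low Extraversion from
# medium-grained meeting behaviours. Feature names and the classifier
# are assumptions made for this example.
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def extraversion_accuracy(slices, labels):
    """
    slices: (n, 3) array with one row per observation window, e.g.
            [speaking_time_fraction, gaze_given_fraction, gaze_received_fraction]
    labels: (n,) binary high/low Extraversion label of the observed person.
    Returns mean 5-fold cross-validated accuracy.
    """
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
    return cross_val_score(clf, slices, labels, cv=5).mean()

# The study reports that accuracy depends on how long behaviour is observed;
# one would re-extract `slices` for several window sizes and compare scores.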


ACM Multimedia | 2011

Can computers learn from humans to see better?: inferring scene semantics from viewers' eye movements

Ramanathan Subramanian; Victoria Yanulevskaya; Nicu Sebe

This paper describes an attempt to bridge the semantic gap between computer vision and scene understanding by employing eye movements. While computer vision algorithms can efficiently detect scene objects, discovering semantic relationships between these objects is equally essential for scene understanding. Humans understand complex scenes by rapidly moving their eyes (saccades) to selectively focus on salient entities (fixations). For 110 social scenes, we compared verbal descriptions provided by observers against eye movements recorded during a free-viewing task. Data analysis confirms (i) a strong correlation between task-explicit linguistic descriptions and task-implicit eye movements, both of which are influenced by underlying scene semantics, and (ii) the ability of eye movements, in the form of fixations and saccades, to indicate salient entities and entity relationships mentioned in scene descriptions. We demonstrate how eye movements are useful for inferring the meaning of social (everyday scenes depicting human activities) and affective (emotion-evoking content like expressive faces, nudes) scenes. While saliency has always been studied through the prism of fixations, we show that saccades are particularly useful for (i) distinguishing mild and high-intensity facial expressions and (ii) discovering interactive actions between scene entities.


International Journal of Computer Vision | 2014

Exploring Transfer Learning Approaches for Head Pose Classification from Multi-view Surveillance Images

Anoop Kolar Rajagopal; Ramanathan Subramanian; Elisa Ricci; Radu L. Vieriu; Oswald Lanz; Ramakrishnan Kalpathi; Nicu Sebe

Head pose classification from surveillance images acquired with distant, large field-of-view cameras is difficult as faces are captured at low resolution and have a blurred appearance. Domain adaptation approaches are useful for transferring knowledge from the training (source) to the test (target) data when they have different attributes, minimizing target data labeling efforts in the process. This paper examines the use of transfer learning for efficient multi-view head pose classification with minimal target training data under three challenging situations: (i) where the range of head poses in the source and target images is different, (ii) where source images capture a stationary person while target images capture a moving person whose facial appearance varies under motion due to changing perspective and scale, and (iii) a combination of (i) and (ii). On the whole, the presented methods represent novel transfer learning solutions employed in the context of multi-view head pose classification. We demonstrate that the proposed solutions considerably outperform the state-of-the-art through extensive experimental validation. Finally, we present the DPOSE dataset, compiled for benchmarking head pose classification performance with moving persons and for aiding behavioral understanding applications.


ACM Multimedia | 2011

Automatic modeling of personality states in small group interactions

Jacopo Staiano; Bruno Lepri; Ramanathan Subramanian; Nicu Sebe; Fabio Pianesi

In this paper, we target the automatic recognition of personality states in a meeting scenario employing visual and acoustic features. The social psychology literature has coined the term personality state to refer to a specific behavioral episode wherein a person behaves as more or less introverted/extroverted, neurotic, open to experience, etc. Personality traits can then be reconstructed as density distributions over personality states. Different machine learning approaches were used to test the effectiveness of the selected features in modeling the dynamics of personality states.


International Conference on Multimodal Interfaces | 2010

Employing social gaze and speaking activity for automatic determination of the Extraversion trait

Bruno Lepri; Ramanathan Subramanian; Kyriaki Kalimeri; Jacopo Staiano; Fabio Pianesi; Nicu Sebe

In order to predict the Extraversion personality trait, we exploit medium-grained behaviors enacted in group meetings, namely speaking time and social attention (social gaze). The latter is further distinguished into attention given to the group members and attention received from them. The results of our work confirm many of our hypotheses: a) speaking time and (some forms of) social gaze are effective in automatically predicting Extraversion; b) classification accuracy is affected by the size of the time slices used for analysis; and c) to a large extent, considering the social context does not add much to prediction accuracy, with an important exception concerning social gaze.


Journal of Vision | 2014

Emotion modulates eye movement patterns and subsequent memory for the gist and details of movie scenes.

Ramanathan Subramanian; Divya Shankar; Nicu Sebe; David Melcher

A basic question in vision research regards where people look in complex scenes and how this influences their performance in various tasks. Previous studies with static images have demonstrated a close link between where people look and what they remember. Here, we examined the pattern of eye movements when participants watched neutral and emotional clips from Hollywood-style movies. Participants answered multiple-choice memory questions concerning visual and auditory scene details immediately upon viewing 1-min-long neutral or emotional movie clips. Fixations were more narrowly focused for emotional clips, and immediate memory for object details was worse compared to matched neutral scenes, implying preferential attention to emotional events. Although we found the expected correlation between where people looked and what they remembered for neutral clips, this relationship broke down for emotional clips. Participants were subsequently presented with key frames (static images) extracted from the movie clips, such that the presentation duration of the target objects (TOs) corresponding to the multiple-choice questions was matched, and the earlier questions were repeated. In this condition, more fixations were observed on the TOs, and memory performance also improved significantly, confirming that emotion modulates the relationship between gaze position and memory performance. Finally, in a long-term memory test, old/new recognition performance was significantly better for emotional scenes as compared to neutral scenes. Overall, these results are consistent with the hypothesis that emotional content draws eye fixations and strengthens memory for the scene gist while weakening encoding of peripheral scene details.


ACM Multimedia | 2010

Putting the pieces together: multimodal analysis of social attention in meetings

Ramanathan Subramanian; Jacopo Staiano; Kyriaki Kalimeri; Nicu Sebe; Fabio Pianesi

This paper presents a multimodal framework employing eye-gaze, head-pose and speech cues to explain observed social attention patterns in meeting scenes. We first investigate a few hypotheses concerning social attention and characterize meetings and individuals based on ground-truth data. This is followed by replication of ground-truth results through automated estimation of eye-gaze, head-pose and speech activity for each participant. Experimental results show that combining eye-gaze and head-pose estimates decreases error in social attention estimation by over 26%.
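As a hypothetical sketch of how eye-gaze and head-pose cues could be combined to pick a participant's focus of attention, the snippet below fuses the two direction estimates with a fixed weight and selects the angularly closest other participant. The weighting scheme, angle representation and nearest-target rule are illustrative assumptions, not the paper's method.

# Minimal, assumption-laden sketch: fuse a head-pose angle with an eye-gaze
# angle and attribute attention to the closest other participant.
def attention_target(head_pose_deg, eye_gaze_deg, candidate_angles_deg, w_gaze=0.6):
    """
    head_pose_deg / eye_gaze_deg: estimated horizontal looking directions (degrees)
    of the observer's head and eyes in a shared reference frame.
    candidate_angles_deg: dict mapping participant id -> direction (degrees)
    from the observer to that participant.
    w_gaze: fusion weight for the eye-gaze estimate (hypothetical value).
    Returns the id of the participant closest to the fused looking direction.
    """
    fused = w_gaze * eye_gaze_deg + (1.0 - w_gaze) * head_pose_deg
    diffs = {pid: abs((ang - fused + 180) % 360 - 180)
             for pid, ang in candidate_angles_deg.items()}
    return min(diffs, key=diffs.get)

# Example: head at 20 deg but eyes at 35 deg is judged as attending to the
# person sitting at ~30 deg ("B") rather than the one at 0 deg ("A").
print(attention_target(20.0, 35.0, {"A": 0.0, "B": 30.0}))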

Collaboration


Dive into Ramanathan Subramanian's collaborations.

Top Co-Authors

Oswald Lanz (Fondazione Bruno Kessler)
Yan Yan (University of Trento)
K. R. Ramakrishnan (Indian Institute of Science)
Bruno Lepri (Fondazione Bruno Kessler)
Fabio Pianesi (Fondazione Bruno Kessler)