Publication


Featured research published by Abhilash Srikantha.


International Journal of Computer Vision | 2016

Capturing Hands in Action Using Discriminative Salient Points and Physics Simulation

Dimitrios Tzionas; Luca Ballan; Abhilash Srikantha; Marc Pollefeys; Juergen Gall

Hand motion capture is a popular research field, recently gaining more attention due to the ubiquity of RGB-D sensors. However, even most recent approaches focus on the case of a single isolated hand. In this work, we focus on hands that interact with other hands or objects and present a framework that successfully captures motion in such interaction scenarios for both rigid and articulated objects. Our framework combines a generative model with discriminatively trained salient points to achieve a low tracking error and with collision detection and physics simulation to achieve physically plausible estimates even in the case of occlusions and missing visual data. Since all components are unified in a single objective function which is almost everywhere differentiable, it can be optimized with standard optimization techniques. Our approach works for monocular RGB-D sequences as well as setups with multiple synchronized RGB cameras. For a qualitative and quantitative evaluation, we captured 29 sequences with a large variety of interactions and up to 150 degrees of freedom.
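
The unifying idea above is that the data term, the salient-point term, and the physics/collision term are combined into a single objective that is differentiable almost everywhere, so standard optimizers apply. A minimal sketch of that structure, with toy residual functions and illustrative weights (the residuals, LAMBDA_S, LAMBDA_C, and the 26-DoF parameterization are assumptions for illustration, not the paper's actual terms, which operate on a posed hand/object mesh against RGB-D observations):

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical stand-ins for the paper's residual terms.
def data_term(theta):       # fit of the posed model to the observations
    return np.sum((theta - 1.0) ** 2)

def salient_term(theta):    # distance to discriminatively detected salient points
    return np.sum((theta - 0.5) ** 2)

def collision_term(theta):  # physics/collision penalty discouraging interpenetration
    return np.sum(np.maximum(0.0, -theta) ** 2)

LAMBDA_S, LAMBDA_C = 0.5, 0.1   # illustrative weights, not from the paper

def objective(theta):
    # One unified objective, differentiable almost everywhere, so a
    # standard gradient-based optimizer can minimize it directly.
    return data_term(theta) + LAMBDA_S * salient_term(theta) + LAMBDA_C * collision_term(theta)

theta0 = np.zeros(26)           # e.g., a 26-DoF hand pose parameter vector
result = minimize(objective, theta0, method="L-BFGS-B")
print(result.x[:5])
```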


German Conference on Pattern Recognition | 2014

Capturing Hand Motion with an RGB-D Sensor, Fusing a Generative Model with Salient Points

Dimitrios Tzionas; Abhilash Srikantha; Juergen Gall

Hand motion capture has been an active research topic, following the success of full-body pose tracking. Despite similarities, hand tracking proves to be more challenging, characterized by a higher dimensionality, severe occlusions and self-similarity between fingers. For this reason, most approaches rely on strong assumptions, like hands in isolation or expensive multi-camera systems, that limit practical use. In this work, we propose a framework for hand tracking that can capture the motion of two interacting hands using only a single, inexpensive RGB-D camera. Our approach combines a generative model with collision detection and discriminatively learned salient points. We quantitatively evaluate our approach on 14 new sequences with challenging interactions.
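
Among the components named above, collision detection is what keeps the two interacting hands from interpenetrating. A toy sketch of a soft collision penalty, approximating finger segments as spheres (both the sphere approximation and the squared-overlap penalty are assumptions for illustration):

```python
import numpy as np

def collision_penalty(centers, radii):
    """Sum of squared overlap depths over all sphere pairs.

    centers: (N, 3) sphere centers approximating finger segments
    radii:   (N,)   sphere radii
    The penalty is zero for separated spheres and grows smoothly
    as they interpenetrate, so it fits a differentiable objective.
    """
    penalty = 0.0
    n = len(radii)
    for i in range(n):
        for j in range(i + 1, n):
            dist = np.linalg.norm(centers[i] - centers[j])
            overlap = max(0.0, radii[i] + radii[j] - dist)
            penalty += overlap ** 2
    return penalty

# Two unit spheres whose centers are 1.5 apart overlap by 0.5:
centers = np.array([[0.0, 0.0, 0.0], [1.5, 0.0, 0.0]])
print(collision_penalty(centers, np.array([1.0, 1.0])))  # 0.25
```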


European Conference on Computer Vision | 2014

Discovering Object Classes from Activities

Abhilash Srikantha; Juergen Gall

In order to avoid an expensive manual labelling process or to learn object classes autonomously without human intervention, object discovery techniques have been proposed that extract visually similar objects from weakly labelled videos. However, the problem of discovering small or medium sized objects is largely unexplored. We observe that videos with activities involving human-object interactions can serve as weakly labelled data for such cases. Since neither object appearance nor motion is distinct enough to discover objects in such videos, we propose a framework that samples from a space of algorithms and their parameters to extract sequences of object proposals. Furthermore, we model similarity of objects based on appearance and functionality, which is derived from human and object motion. We show that functionality is an important cue for discovering objects from activities and demonstrate the generality of the model on three challenging RGB-D and RGB datasets.
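
The key modelling choice above is to score object similarity jointly over appearance and functionality, where functionality is derived from human and object motion. A hypothetical weighted blend, purely to illustrate the idea (the cosine measure, the descriptors, and the weight alpha are all assumptions, not the paper's model):

```python
import numpy as np

def cosine_sim(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def object_similarity(app_a, app_b, func_a, func_b, alpha=0.5):
    """Blend appearance similarity (e.g., descriptors of the object
    proposals) with functionality similarity (e.g., descriptors of the
    human and object motion during the interaction). alpha is an
    illustrative mixing weight."""
    return alpha * cosine_sim(app_a, app_b) + (1 - alpha) * cosine_sim(func_a, func_b)

rng = np.random.default_rng(0)
a, b = rng.normal(size=128), rng.normal(size=128)   # appearance descriptors
fa, fb = rng.normal(size=32), rng.normal(size=32)   # functionality descriptors
print(object_similarity(a, b, fa, fb))
```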


Proceedings of SPIE | 2013

Symmetry-based detection and diagnosis of DCIS in breast MRI

Abhilash Srikantha; Markus Thorsten Harz; Gillian M. Newstead; Lei Wang; Bram Platel; Katrin Hegenscheid; Ritse M. Mann; Horst K. Hahn; Heinz-Otto Peitgen

The delineation and diagnosis of non-mass-like lesions, most notably DCIS (ductal carcinoma in situ), is among the most challenging tasks in breast MRI reading. Even for human observers, DCIS is not always easy to differentiate from patterns of active parenchymal enhancement or from benign alterations of breast tissue. In this light, it is no surprise that CADe/CADx approaches often completely fail to classify DCIS. Of the several approaches that have tried to devise such computer aid, none achieves performance similar to mass detection and classification in terms of sensitivity and specificity. In our contribution, we show a novel approach that combines a newly proposed metric of anatomical breast symmetry calculated on subtraction images of dynamic contrast-enhanced (DCE) breast MRI, descriptive kinetic parameters, and lesion candidate morphology to achieve performances comparable to computer-aided methods used for masses. We have based the development of the method on DCE-MRI data of 18 DCIS cases with hand-annotated lesions, complemented by DCE-MRI data of nine normal cases. We propose a novel metric to quantify the symmetry of contralateral breasts and derive a strong indicator for potentially malignant changes from this metric. Also, we propose a novel metric for the orientation of a finding towards a fixed point (the nipple). Our combined scheme then achieves a sensitivity of 89% with a specificity of 78%, matching CAD results for breast MRI on masses. The processing pipeline is intended to run on a CAD server, hence we designed all processing to be automated and free of per-case parameters. We expect that the detection results of our proposed non-mass-aimed algorithm will complement other CAD algorithms, or ideally be joined with them in a voting scheme.
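
The centrepiece of the method is a metric for the symmetry of the contralateral breasts computed on DCE subtraction images. As a toy illustration of what such a score might look like, the sketch below mirrors one half of a subtraction slice and correlates it against the other half; this mirroring-and-correlation scheme is an assumption for illustration, not the paper's actual metric:

```python
import numpy as np

def symmetry_score(subtraction_image):
    """Correlate the left half of an (H, W) subtraction image with the
    mirrored right half. Scores near 1 suggest symmetric enhancement;
    low scores flag asymmetric, potentially malignant enhancement."""
    h, w = subtraction_image.shape
    left = subtraction_image[:, : w // 2]
    right = subtraction_image[:, w - w // 2 :][:, ::-1]  # mirror the right half
    l = left.ravel() - left.mean()
    r = right.ravel() - right.mean()
    return float(np.dot(l, r) / (np.linalg.norm(l) * np.linalg.norm(r) + 1e-12))

img = np.random.rand(64, 64)
img_sym = (img + img[:, ::-1]) / 2             # perfectly symmetric test image
print(symmetry_score(img_sym))                 # close to 1.0
print(symmetry_score(np.random.rand(64, 64)))  # near 0 for unrelated halves
```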


International Conference on Image Processing | 2014

Hough-based object detection with grouped features

Abhilash Srikantha; Juergen Gall

Hough-based voting approaches have been successfully applied to object detection. While these methods can be efficiently implemented by random forests, they estimate the probability for an object hypothesis independently for each feature. In this work, we address this problem by grouping features in a local neighborhood to obtain a better estimate of the probability. To this end, we propose oblique classification-regression forests that combine features of different trees. We further investigate the benefit of combining independent and grouped features and evaluate the approach on RGB and RGB-D datasets.
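
As background for the abstract above: in Hough-based detection, each local feature casts weighted votes for the object centre, and the detection hypothesis is the maximum of the accumulated Hough image; the paper's contribution is estimating the vote weights from groups of neighbouring features rather than per feature independently. A minimal vote-accumulation sketch (the votes and weights are stubbed; in the paper they come from classification-regression forests):

```python
import numpy as np

def accumulate_votes(votes, weights, image_shape):
    """Build a Hough image: each feature votes for candidate object
    centres (x, y) with a weight, e.g. its estimated foreground
    probability. The detection is the accumulator's maximum."""
    hough = np.zeros(image_shape)
    for (x, y), w in zip(votes, weights):
        if 0 <= y < image_shape[0] and 0 <= x < image_shape[1]:
            hough[y, x] += w
    return hough

# Stubbed votes; a forest would predict these offsets and probabilities.
votes = [(50, 40), (51, 40), (50, 41), (10, 70)]
weights = [0.9, 0.8, 0.85, 0.2]
hough = accumulate_votes(votes, weights, (100, 100))
cy, cx = np.unravel_index(np.argmax(hough), hough.shape)
print("detected centre:", (cx, cy))
```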


Computer Vision and Pattern Recognition | 2017

Weakly Supervised Affordance Detection

Johann Sawatzky; Abhilash Srikantha; Juergen Gall

Localizing functional regions of objects, or affordances, is an important aspect of scene understanding and relevant for many robotics applications. In this work, we introduce a pixel-wise annotated affordance dataset of 3090 images containing 9916 object instances. Since parts of an object can have multiple affordances, we address this with a convolutional neural network for multilabel affordance segmentation. We also propose an approach to train the network from very few keypoint annotations. Our approach achieves a higher affordance detection accuracy than other weakly supervised methods that also rely on keypoint annotations or image annotations as weak supervision.
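
Because one object part can carry several affordances, the segmentation must be multilabel: per pixel, an independent sigmoid per affordance class rather than a softmax over classes. A minimal sketch of that decision rule (the random logits stand in for network outputs, and the 0.5 threshold is an assumption):

```python
import numpy as np

def multilabel_segmentation(logits, threshold=0.5):
    """logits: (C, H, W) per-class scores from a segmentation network.
    Independent sigmoids let a pixel carry several affordances at once
    (e.g., both 'grasp' and 'support'), unlike a softmax, which would
    force exactly one label per pixel."""
    probs = 1.0 / (1.0 + np.exp(-logits))
    return probs > threshold          # (C, H, W) boolean masks, not exclusive

logits = np.random.randn(7, 4, 4)     # e.g., 7 affordance classes
masks = multilabel_segmentation(logits)
print(masks[:, 0, 0])                 # one pixel may be True for several classes
```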


German Conference on Pattern Recognition | 2013

Symmetry-Based Detection and Diagnosis of DCIS in Breast MRI

Abhilash Srikantha

Detecting early-stage breast cancers like Ductal Carcinoma In Situ (DCIS) is important, as it supports effective and minimally invasive treatments. Although Computer Aided Detection/Diagnosis (CADe/CADx) systems have been successfully employed for highly malignant carcinomas, their performance on DCIS is inadequate. In this context, we propose a novel approach combining symmetry, kinetics and morphology that achieves superior performance. We base our work on contrast-enhanced data of 18 pure DCIS cases with hand-annotated lesions and 9 purely normal cases. The overall sensitivity and specificity of the system stood at 89% each.
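
For reference, the reported figures are simple ratios over confusion-matrix counts. The sketch below shows the arithmetic on made-up counts chosen to land at 89%; the counts themselves are illustrative, not the study's:

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP / (TP + FN): fraction of DCIS lesions detected.
    Specificity = TN / (TN + FP): fraction of normal cases passed."""
    return tp / (tp + fn), tn / (tn + fp)

# Illustrative counts only; 16/18 and 8/9 both round to the reported 89%.
sens, spec = sensitivity_specificity(tp=16, fn=2, tn=8, fp=1)
print(f"sensitivity={sens:.0%}, specificity={spec:.0%}")
```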


Proceedings of SPIE | 2012

A shape-based statistical method to retrieve 2D TRUS-MR slice correspondence for prostate biopsy

Jhimli Mitra; Abhilash Srikantha; Désiré Sidibé; Robert Martí; Arnau Oliver; Xavier Lladó; Soumya Ghose; Joan C. Vilanova; Josep Comet; Fabrice Meriaudeau

This paper presents a method based on shape context and statistical measures to match an interventional 2D Trans Rectal Ultrasound (TRUS) slice acquired during prostate biopsy to a 2D Magnetic Resonance (MR) slice of a pre-acquired prostate volume. Accurate biopsy tissue sampling requires translation of the MR slice information onto the TRUS-guided biopsy slice. However, this translation or fusion requires knowledge of the spatial position of the TRUS slice, which is only possible with an electromagnetic (EM) tracker attached to the TRUS probe. Since the use of an EM tracker is not common in clinical practice and 3D TRUS is not used during biopsy, we propose an analysis based on shape and information theory that comes close enough to the actual MR slice, as validated by experts. The Bhattacharyya distance is used to find point correspondences between shape-context representations of the prostate contours. Thereafter, the chi-square distance is used to identify the MR slices whose prostate shapes closely match that of the TRUS slice. Normalized Mutual Information (NMI) values of the TRUS slice with each of the axial MR slices are computed after rigid alignment, and subsequently a strategic elimination based on a set of rules between the chi-square distances and the NMI leads to the required MR slice. We validated our method on TRUS axial slices of 15 patients, of which 11 results matched at least one expert's validation and the remaining 4 were at most one slice away from the expert validations.
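
Two of the measures named above are standard histogram distances: the Bhattacharyya distance for matching shape-context histograms and the chi-square distance for ranking candidate MR slices. A small sketch of both on normalized histograms, showing the formulas only rather than the paper's full matching pipeline:

```python
import numpy as np

def bhattacharyya_distance(p, q):
    """D_B = -ln(sum_i sqrt(p_i * q_i)) for normalized histograms p, q."""
    bc = np.sum(np.sqrt(p * q))           # Bhattacharyya coefficient
    return -np.log(max(bc, 1e-12))

def chi_square_distance(p, q):
    """chi2 = 0.5 * sum_i (p_i - q_i)^2 / (p_i + q_i), skipping empty bins."""
    denom = p + q
    mask = denom > 0
    return 0.5 * np.sum((p[mask] - q[mask]) ** 2 / denom[mask])

p = np.array([0.2, 0.3, 0.5])
q = np.array([0.1, 0.4, 0.5])
print(bhattacharyya_distance(p, q), chi_square_distance(p, q))
```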


Computer Vision and Image Understanding | 2017

Weak supervision for detecting object classes from activities

Abhilash Srikantha; Juergen Gall

Highlights: the problem of detecting objects from weakly labeled activity videos is addressed; multiple object instances from each video are inferred using a greedy approach; combining object appearance with its functionality greatly improves performance; object detection performance comparable to a fully supervised approach is achieved.

Weakly supervised learning for object detection has been gaining significant attention in the recent past. Visually similar objects are extracted automatically from weakly labeled videos, hence bypassing the tedious process of manually annotating training data. However, the problem as applied to small or medium sized objects is still largely unexplored. Our observation is that weakly labeled information can be derived from videos involving human-object interactions. Since the object is characterized neither by its appearance nor by its motion in such videos, we propose a robust framework that taps valuable human context and models similarity of objects based on appearance and functionality. Furthermore, the framework is designed such that it maximizes the utility of the data by detecting possibly multiple instances of an object from each video. We show that object models trained in this fashion perform between 86% and 92% of their fully supervised counterparts on three challenging RGB and RGB-D datasets.
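
One of the highlights above is the greedy inference of possibly multiple object instances per video. A toy version of such a greedy step, repeatedly keeping the best-scoring temporal proposal and suppressing ones that overlap it (the interval-overlap criterion and the 0.5 threshold are assumptions for illustration):

```python
def greedy_select(proposals, overlap_thresh=0.5):
    """proposals: list of (score, start_frame, end_frame) tuples.
    Greedily keep the best-scoring proposal, discard proposals that
    overlap it, and repeat, so that several instances per video can
    survive rather than just the single best one."""
    def overlap(a, b):
        inter = max(0, min(a[2], b[2]) - max(a[1], b[1]))
        union = max(a[2], b[2]) - min(a[1], b[1])
        return inter / union if union > 0 else 0.0

    remaining = sorted(proposals, reverse=True)   # by score, descending
    kept = []
    while remaining:
        best = remaining.pop(0)
        kept.append(best)
        remaining = [p for p in remaining if overlap(best, p) < overlap_thresh]
    return kept

props = [(0.9, 10, 50), (0.8, 15, 55), (0.7, 100, 140)]
print(greedy_select(props))   # keeps (0.9, 10, 50) and (0.7, 100, 140)
```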


British Machine Vision Conference | 2015

Human Pose as Context for Object Detection

Abhilash Srikantha; Juergen Gall

Detecting small objects in images is a challenging problem, particularly when they are often occluded by hands or other body parts. Recently, joint modelling of human pose and objects has been proposed to improve both pose estimation and object detection. These approaches, however, focus on explicit interaction with an object and lack the flexibility to combine both modalities when interaction is not obvious. We therefore propose to use human pose as additional context information for object detection. To this end, we represent an object category by a binary star model and train regression forests that localize parts of an object for each modality separately. Predictions of the two modalities are then combined to detect the bounding box of the object. We evaluate our approach on three challenging datasets which vary in the amount of object interaction and the quality of automatically extracted human poses.
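
The final step described above combines the predictions of the two modalities into a single bounding-box hypothesis. A minimal sketch of one plausible fusion, a weighted sum of two per-pixel vote maps followed by a maximum (the fixed weight is an assumption; the paper trains separate regression forests per modality and combines their predictions):

```python
import numpy as np

def fuse_and_detect(votes_appearance, votes_pose, w_pose=0.5):
    """Fuse two (H, W) vote maps, one from appearance features and one
    conditioned on the estimated human pose, then return the location
    and score of the strongest combined hypothesis."""
    combined = (1 - w_pose) * votes_appearance + w_pose * votes_pose
    y, x = np.unravel_index(np.argmax(combined), combined.shape)
    return (x, y), combined[y, x]

va = np.random.rand(60, 80)    # stand-in for appearance-based votes
vp = np.random.rand(60, 80)    # stand-in for pose-conditioned votes
centre, score = fuse_and_detect(va, vp)
print(centre, round(score, 3))
```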

Collaboration


Dive into Abhilash Srikantha's collaboration.

Top Co-Authors

Horst K. Hahn

Jacobs University Bremen


Bram Platel

Radboud University Nijmegen


Ritse M. Mann

Radboud University Nijmegen
