Publications


Featured research published by Karthik Mahesh Varadarajan.


International Conference on Advanced Robotics | 2011

Object part segmentation and classification in range images for grasping

Karthik Mahesh Varadarajan; Markus Vincze

Recognition by Components (RBC) has been one of the most conceptually significant frameworks for modeling human visual object recognition. Extension of the model to practical robotic applications has traditionally been limited by the poor response of conventional inexpensive stereo cameras in textureless areas, as well as by the need for expensive laser-based sensor systems to compensate for this deficiency. The recent availability of RGB-D sensors such as the PrimeSense sensor has opened new avenues for the practical use of these sensors in robotic applications such as grasping. In this paper, we present novel algorithms for segmentation of objects and parts from range images, with extensions based on semantic cues to yield robust part detection. The detected parts are then parameterized using a superquadric-based fitting framework and classified into one of several generic shapes. The categorization of the parts supplies rules for grasping the object. This Grasping by Components (GBC) scheme is a natural extension of the RBC framework and provides a scalable framework for grasping objects. The scheme also permits the grasping of novel objects in the scene that have at least one known grasp affordance. Results of the range segmentation are compared with a scene-agnostic graph-based segmentation approach.
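The core GBC idea, classifying superquadric-fitted parts into generic shape classes that each index a grasp rule, can be caricatured in a few lines. This is a purely illustrative sketch, not the paper's implementation: the shape-exponent thresholds, class names, and grasp-rule strings below are all invented for illustration.

```python
# Hypothetical sketch of Grasping by Components: superquadric shape exponents
# (eps1, eps2) are binned into a generic shape class, and each class indexes
# a grasp rule. Thresholds and rule names are illustrative only.

def classify_part(eps1, eps2):
    """Bin superquadric exponents into a generic shape class.

    Roughly: both exponents small -> box-like; small eps1 with large eps2 ->
    cylinder-like; otherwise ellipsoid-like (illustrative thresholds).
    """
    if eps1 < 0.5 and eps2 < 0.5:
        return "box"
    if eps1 < 0.5 <= eps2:
        return "cylinder"
    return "ellipsoid"

# One grasp rule per generic part class (invented for illustration).
GRASP_RULES = {
    "box":       "parallel-jaw grasp on opposing faces",
    "cylinder":  "wrap grasp around the axis",
    "ellipsoid": "spherical precision grasp",
}

def grasp_for_part(eps1, eps2):
    """Look up the grasp rule for a part via its generic shape class."""
    return GRASP_RULES[classify_part(eps1, eps2)]

print(grasp_for_part(0.1, 0.1))  # a box-like part
print(grasp_for_part(0.2, 1.0))  # a cylinder-like part
```

Because a grasp is attached to the shape class rather than to a specific object model, any novel object whose parts fall into known classes inherits at least one grasp, which is the scalability argument the abstract makes.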


Intelligent Robots and Systems | 2012

AfRob: The affordance network ontology for robots

Karthik Mahesh Varadarajan; Markus Vincze

AfNet, the Affordance Network, is an open affordance computing initiative that provides affordance knowledge ontologies for common household articles in terms of affordance features, using surface forms termed afbits (affordance bits). AfNet currently offers 68 base affordance features (25 structural, 10 material, 33 grasp), providing over 200 object category definitions in terms of 4000 afbits. Symbol grounding algorithms for these affordance features enable recognition of objects in visual (RGB-D) data. While AfNet is built as a generic visual knowledge ontology for recognition, it is well suited for deployment on domestic robots. In this paper, we describe AfRob, an extension of AfNet for robotic applications. AfRob builds upon AfNet by incorporating semantic context and mapping for holistic recognition and manipulation of objects in domestic environments. AfRob also offers modules that enable robots to interact with and grasp objects through the generation of grasp affordances. The paper also details the inference mechanisms that adapt AfNet for robots in domestic contexts. Results demonstrate the efficiency of the affordance-driven approach to holistic visual processing.
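The structure described above, object categories defined as sets of affordance features, lends itself to simple inference queries. The sketch below is a toy illustration of that idea only: the feature names and the three-entry ontology are made up and bear no relation to AfNet's actual 68 features or 200 categories.

```python
# Toy affordance ontology in the spirit of AfNet/AfRob: categories are sets
# of affordance features, and a query returns the categories that support a
# requested affordance. All names here are invented for illustration.

ONTOLOGY = {
    "cup":   {"contain", "wrap-grasp", "handle-grasp"},
    "knife": {"cut", "pinch-grasp"},
    "plate": {"support", "pinch-grasp"},
}

def categories_with(affordance):
    """Return, sorted, the object categories whose definition includes the affordance."""
    return sorted(cat for cat, feats in ONTOLOGY.items() if affordance in feats)

print(categories_with("pinch-grasp"))  # ['knife', 'plate']
print(categories_with("contain"))      # ['cup']
```

Inverting the lookup, from a desired affordance to the categories (and hence grasp candidates) that provide it, is the kind of inference a robot needs when asked for "something to cut with" rather than for a named object.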


International Conference on Robotics and Automation | 2014

Attention-driven object detection and segmentation of cluttered table scenes using 2.5D symmetry

Ekaterina Potapova; Karthik Mahesh Varadarajan; Andreas Richtsfeld; Michael Zillich; Markus Vincze

The task of searching for and grasping objects in cluttered scenes, typical of robotic applications in domestic environments, requires fast object detection and segmentation. Attentional mechanisms provide a means to detect objects of interest and prioritize their processing. In this work, we combine a saliency operator based on symmetry with a segmentation method based on clustering locally planar surface patches, both operating on 2.5D point clouds (RGB-D images) as input data, to yield a novel approach to table-top scene segmentation. Evaluation on indoor table-top scenes containing man-made objects clustered in piles and dumped in a box shows that our approach to selecting attention points significantly improves the performance of state-of-the-art attention-based segmentation methods.
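The selection step, turning a per-pixel saliency map into a handful of attention points that seed segmentation, can be sketched generically. This is not the paper's method; it is a minimal greedy peak-picking scheme with invented parameters (`n_points`, `suppress_radius`), shown only to make the "attention points" notion concrete.

```python
import numpy as np

# Illustrative sketch: pick attention points as saliency maxima, suppressing
# a neighborhood around each pick so successive points land on distinct
# candidate objects rather than the same peak.

def attention_points(saliency, n_points=3, suppress_radius=2):
    s = saliency.astype(float).copy()
    points = []
    for _ in range(n_points):
        y, x = np.unravel_index(np.argmax(s), s.shape)
        if s[y, x] <= 0:
            break  # nothing salient remains
        points.append((int(y), int(x)))
        # Zero out a square window around the pick (non-maximum suppression).
        s[max(0, y - suppress_radius):y + suppress_radius + 1,
          max(0, x - suppress_radius):x + suppress_radius + 1] = 0
    return points

# Toy saliency map with two peaks.
sal = np.zeros((10, 10))
sal[2, 2] = 1.0
sal[7, 8] = 0.8
print(attention_points(sal, n_points=2))  # [(2, 2), (7, 8)]
```

The suppression radius plays the role the paper assigns to its symmetry-aware selection: without it, all attention points would pile onto the single strongest peak.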


International Conference on Intelligent Transportation Systems | 2012

Hybridization of appearance and symmetry for vehicle-logo localization

Kai Zhou; Karthik Mahesh Varadarajan; Markus Vincze; Fuqiang Liu

Vehicle-logo detection is of great importance in Intelligent Transportation System (ITS) applications, since it augments traditional License Plate Recognition (LPR) based vehicle identification. An algorithm based on the combination of feature appearance and symmetry is proposed to locate the vehicle logo in a digital image. The air-intake grille, the most recognizable visual structure in the front view of a vehicle, can be efficiently detected and recognized, thereby providing information about the relative position of the vehicle logo. Symmetry, a widely used feature for analyzing vehicle front-view images, is also taken into consideration along with grille detection. This hybrid scheme yields a higher vehicle-logo detection rate than symmetry alone (which performs poorly on scenes with complex backgrounds) or edge statistics alone (which cannot handle vehicles with small or missing grilles). Experimental results on a large number of images validate the robustness and efficiency of the proposed algorithm.
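The symmetry cue mentioned above can be made concrete with a generic reflective-symmetry measure: score each candidate vertical axis by how well the image matches its own horizontal mirror about that axis. This is a standard textbook measure, not the paper's exact formulation; the `half_width` parameter is illustrative.

```python
import numpy as np

# Generic reflective-symmetry sketch: slide a candidate vertical axis across
# the image and score how well a strip left of the axis matches the mirrored
# strip right of it. The best-scoring column approximates the symmetry axis.

def vertical_symmetry_axis(img, half_width=3):
    h, w = img.shape
    best_col, best_score = None, -np.inf
    for c in range(half_width, w - half_width):
        left = img[:, c - half_width:c]
        right = img[:, c + 1:c + half_width + 1][:, ::-1]  # mirrored strip
        score = -np.abs(left - right).mean()  # higher = more symmetric
        if score > best_score:
            best_col, best_score = c, score
    return best_col

# Toy "front view": two bright vertical bars symmetric about column 5.
img = np.zeros((8, 11))
img[:, 2] = img[:, 8] = 1.0
print(vertical_symmetry_axis(img))  # 5
```

On a real front-view image this measure alone is fragile against cluttered backgrounds, which is exactly why the paper fuses it with grille appearance.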


International Conference on Image Processing | 2010

Real-time depth diffusion for 3D surface reconstruction

Karthik Mahesh Varadarajan; Markus Vincze

Range data obtained from conventional stereo cameras employing dense stereo matching algorithms typically contain a high amount of noise, especially under poor illumination conditions. Furthermore, the lack of reliable depth estimates in low-texture regions can result in poor 3D surface reconstruction. Anisotropic diffusion algorithms have recently been used in stereo matching, depth estimation and 3D surface reconstruction. However, these algorithms typically have long execution times, preventing real-time operation on resource-constrained systems and robots. Moreover, most of these techniques suffer from excessive smoothing at depth discontinuities, resulting in loss of structure, especially in areas where the 2D image does not provide structural cues to guide the depth diffusion. These algorithms are also unsuitable for diffusion of extremely sparse depth data, such as that from homogeneous surfaces. This paper addresses these issues with novel denoising and diffusion techniques. The results presented demonstrate the run-time efficiency and fidelity of the reconstructed depth surfaces.
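The trade-off the abstract describes, smoothing noisy depth without blurring across discontinuities, is the defining property of edge-stopping (anisotropic) diffusion. The sketch below shows the generic idea on a 1-D depth profile guided by an intensity edge; it is not the paper's algorithm, and the parameters (`K`, `n_iter`, `step`) are illustrative.

```python
import numpy as np

# Generic anisotropic-diffusion sketch: a conductance term derived from the
# guide image's gradients gates the diffusion flux, so depth smooths within
# regions but is damped across intensity edges.

def diffuse_depth(depth, intensity, n_iter=200, step=0.2, K=0.1):
    d = depth.astype(float).copy()
    # Edge-stopping conductance: ~1 in flat regions, ~0 across strong edges.
    g = np.exp(-(np.diff(intensity) / K) ** 2)
    for _ in range(n_iter):
        flux = g * np.diff(d)   # neighbor-to-neighbor flow, gated at edges
        d[:-1] += step * flux   # explicit (forward-Euler) diffusion update
        d[1:] -= step * flux
    return d

intensity = np.array([0.0, 0.0, 0.0, 1.0, 1.0, 1.0])   # sharp edge mid-way
depth = np.array([1.0, 1.2, 0.9, 3.1, 2.9, 3.0])       # noisy, with a step
out = diffuse_depth(depth, intensity)
# Each half flattens toward its mean, while the step across the edge stays.
```

Isotropic diffusion (setting `g` to all ones) would smear the depth step away, which is the "loss of structure" failure mode the abstract criticizes; the gating term is what preserves it.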


International Conference on Computer Vision Systems | 2011

Knowledge representation and inference for grasp affordances

Karthik Mahesh Varadarajan; Markus Vincze

Knowledge bases for semantic scene understanding and processing form indispensable components of holistic intelligent computer vision and robotic systems. Specifically, task-based grasping requires perception modules that are tied to knowledge representation systems in order to provide optimal solutions. However, most state-of-the-art systems for robotic grasping, such as K-CoPMan, which uses semantic information in mapping and planning for grasping, depend on explicit 3D model representations, restricting scalability. Moreover, these systems lack conceptual knowledge that could aid the perception module in identifying, through implicit cognitive processing, the best objects in the field of view for task-based manipulation. This restricts the scalability, extensibility, usability and versatility of such systems. In this paper, we utilize the concept of functional and geometric part affordances to build a holistic knowledge representation and inference framework to aid task-based grasping. The performance of the system is evaluated on complex scenes and indirect queries.


International Conference on Computer Vision | 2011

Surface reconstruction for RGB-D data using real-time depth propagation

Karthik Mahesh Varadarajan; Markus Vincze

Real-time noise removal and depth propagation are crucial components of surface reconstruction algorithms. Given the recent surge in the development of RGB-D sensors, a host of methods are available for detecting and tracking RGB-D features across multiple frames, as well as for combining these frames to yield dense 3D point clouds. Nevertheless, the sensor outputs are sparse in low-texture areas (for traditional stereo cameras) and in high-reflectance regions (for Kinect-like active sensors). It is crucial to employ a depth-estimate propagation or diffusion algorithm to generate a best-approximation surface curvature in these regions for visualization. In this paper, we extend the Depth Diffusion using Iterative Back Substitution scheme to Kinect-like RGB-D sensor data for real-time surface reconstruction.
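Propagating depth into sensor holes can be framed as solving for the unknown pixels given the measured ones, a small linear system that iterative solvers handle well. The sketch below is a generic Gauss-Seidel-style relaxation (each hole pixel repeatedly replaced by its neighbor average, measured pixels held fixed), shown only to illustrate the idea; it is not the paper's iterative back-substitution scheme, and `n_iter` is an invented parameter.

```python
import numpy as np

# Generic hole-filling sketch: treat NaN (missing) depth pixels as unknowns
# and relax each one toward the average of its 4-neighbors, keeping measured
# pixels fixed. This is an iterative solve of a small Laplace-type system.

def propagate_depth(depth, n_iter=500):
    d = depth.astype(float).copy()
    known = ~np.isnan(d)
    d[~known] = np.nanmean(depth)  # crude initial guess for the unknowns
    for _ in range(n_iter):
        for y in range(1, d.shape[0] - 1):
            for x in range(1, d.shape[1] - 1):
                if not known[y, x]:
                    d[y, x] = 0.25 * (d[y - 1, x] + d[y + 1, x] +
                                      d[y, x - 1] + d[y, x + 1])
    return d

depth = np.full((5, 5), 2.0)
depth[2, 2] = np.nan          # e.g. a high-reflectance hole in Kinect data
filled = propagate_depth(depth)
print(filled[2, 2])  # 2.0
```

The nested Python loops are of course far too slow for the real-time goal the abstract states; a practical solver would vectorize the sweep or exploit the banded structure of the system.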


2011 5th International Symposium on Computational Intelligence and Intelligent Informatics (ISCIII) | 2011

Augmented virtuality based immersive telepresence for control of mining robots

Karthik Mahesh Varadarajan; Markus Vincze

Vast mineral resources of precious metals such as gold remain trapped and unexploited due to the lack of economical and practical means of exploration, requiring the development of alternative exploitation techniques. Mining robots form a significant alternative to conventional mining techniques. However, several practical limitations make such systems difficult to implement in practice. The primary hurdle is the difficulty of tele-operating the robot under the high-latency conditions typical of mining environments. This is further compounded by poor representation of the environment, resulting in reduced situational awareness. The latency in tele-operation can be caused by numerous factors: system latency, compression scheme, communication protocols, bandwidth constraints, channel contention, poor line of sight and display overhead. It is typically countered by reducing frame rate, display resolution or quality, which in turn hampers remote navigation of the robot. Non-holistic scene displays further degrade situational perception, which is intricately tied to the effectiveness of the Operator Control Unit (OCU). Moreover, improvements in these capabilities without any vehicle intelligence do little to reduce the operator task load. In this paper, we present the design of a novel augmented virtuality based visualization and operator interface unit, supported by vehicular intelligence, targeted at overcoming the above issues. These design considerations and the presented algorithms are expected to form the foundation of next-generation mining robots.


Intelligent Robots and Systems | 2010

3D room modeling and doorway detection from indoor stereo imagery using feature guided piecewise depth diffusion

Karthik Mahesh Varadarajan; Markus Vincze

Traditional indoor 3D structural environment modeling algorithms employ schemes such as clustering of dense point clouds for parameterization and identification of 3D surfaces. RANSAC-based plane fitting is one common approach. Alternatively, extensions to feature-based stereo have also been used, mainly focusing on 3D line descriptions, along with techniques such as half-plane detection, real-plane or facade reconstruction, and plane sweeping. Noise in the range data (especially in low-texture regions), accidental line/plane grouping in the absence of cues for visibility tests, depth edges or discontinuities that are not visible in the 2D image, and difficulties in adaptively estimating clustering metrics can all hamper the efficiency of practical systems. To counter these issues, we propose a novel framework fusing 2D local and global features, such as edges, texture and regions, with geometry information obtained from range data, for reliable 3D indoor scene representation. The strength of the approach derives from novel depth diffusion and segmentation algorithms that yield superior surface characterization compared with traditional feature-based stereo or RANSAC-based plane fitting. These algorithms have also been heavily optimized to enable real-time deployment on personal, domestic and rehabilitation robots.
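For readers unfamiliar with the RANSAC plane-fitting baseline the abstract compares against, a minimal version is easy to state: repeatedly fit a plane to three random points and keep the plane with the most inliers. The sketch below is that textbook baseline, not the paper's method; the iteration count, threshold and toy data are illustrative.

```python
import numpy as np

# Minimal RANSAC plane fitting: sample 3 points, form the plane normal via a
# cross product, count points within a distance threshold, keep the best.

def ransac_plane(points, n_iter=200, threshold=0.05, seed=0):
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(points), dtype=bool)
    for _ in range(n_iter):
        sample = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(normal)
        if norm < 1e-9:
            continue  # degenerate (collinear) sample
        normal /= norm
        dist = np.abs((points - sample[0]) @ normal)  # point-plane distances
        inliers = dist < threshold
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return best_inliers

# Toy scene: a noisy floor plane (z ~ 0) plus a few points well above it.
rng = np.random.default_rng(1)
floor = np.column_stack([rng.uniform(0, 1, (50, 2)),
                         rng.normal(0, 0.005, 50)])
outliers = rng.uniform(0, 1, (5, 3)) + np.array([0.0, 0.0, 1.0])
points = np.vstack([floor, outliers])
mask = ransac_plane(points)  # floor points in, elevated points out
```

The failure modes the abstract lists map directly onto this loop: range noise inflates the distance threshold needed, and accidental groupings correspond to spurious 3-point samples that happen to collect many inliers.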


International Conference on Computer Vision Systems | 2013

Parallel deep learning with suggestive activation for object category recognition

Karthik Mahesh Varadarajan; Markus Vincze

The performance of visual perception algorithms for object category detection has largely been restricted by the lack of generalizability and scalability of state-of-the-art hand-crafted feature detectors and descriptors across instances of objects with different shapes, textures and so on. Recently introduced deep learning algorithms have attempted to overcome this limitation through automatic learning of feature kernels. Nevertheless, conventional deep learning architectures are uni-modal, essentially feedforward testing pipelines working on image space with little regard for context and semantics. In this paper, we address this issue by presenting a new deep-learning framework for object categorization, Parallel Deep Learning with Suggestive Activation (PDLSA), that incorporates several brain operating principles drawn from neuroscience and psychophysical studies. In particular, we focus on Suggestive Activation, a schema that introduces feedback loops into the recognition process: information from partial detection results generates hypotheses from long-term memory (or a knowledge base), the image space is then searched for features corresponding to these hypotheses, and multi-modal integration activates the response corresponding to the correct object category. Results against a traditional SIFT-based category classifier on the University of Washington benchmark RGB-D dataset demonstrate the validity of the approach.
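The suggestive-activation loop can be caricatured in control-flow terms: a partial detection proposes category hypotheses from a knowledge base, and each hypothesis directs a search for its remaining expected features. The sketch below is only that caricature; the toy knowledge base, feature names and `find_feature` callback are all invented, and the real PDLSA operates on learned feature kernels, not symbolic strings.

```python
# Caricature of the hypothesize-then-verify feedback loop: rank category
# hypotheses by overlap with features detected so far, then search for each
# hypothesis's missing features before committing to a category.

KNOWLEDGE_BASE = {
    "mug":    {"handle", "rim", "cylinder-body"},
    "bottle": {"cap", "neck", "cylinder-body"},
}

def suggestive_activation(detected, find_feature):
    """Return the first category whose missing features the directed search confirms."""
    detected = set(detected)
    ranked = sorted(KNOWLEDGE_BASE,
                    key=lambda cat: -len(KNOWLEDGE_BASE[cat] & detected))
    for cat in ranked:
        missing = KNOWLEDGE_BASE[cat] - detected
        if all(find_feature(f) for f in missing):  # hypothesis-directed search
            return cat
    return None

# Toy "image": the set of features actually present in the scene.
scene = {"handle", "rim", "cylinder-body"}
result = suggestive_activation({"cylinder-body"}, lambda f: f in scene)
print(result)  # mug
```

The point of the loop is that the partial detection ("cylinder-body") does not decide the category by itself; it merely prioritizes which features to look for next, which is the feedback behavior the abstract contrasts with purely feedforward pipelines.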

Collaboration


Top co-authors of Karthik Mahesh Varadarajan:

Kai Zhou (Vienna University of Technology)
Michael Zillich (Vienna University of Technology)
Andreas Richtsfeld (Vienna University of Technology)
Ekaterina Potapova (Vienna University of Technology)
Christina Pahl (Technische Universität Ilmenau)
Peter Einramhof (Vienna University of Technology)
Robert Schwarz (Vienna University of Technology)