
Publication


Featured research published by Dadhichi Shukla.


Digital Image Computing: Techniques and Applications | 2015

Probabilistic Detection of Pointing Directions for Human-Robot Interaction

Dadhichi Shukla; Özgür Erkent; Justus H. Piater

Deictic gestures - pointing at things in human-human collaborative tasks - constitute a pervasive, non-verbal way of communication, used e.g. to direct attention towards objects of interest. In human-robot interaction, a key requirement for delegating tasks from a human to a robot is to recognize the pointing gesture and estimate its pose. Standard approaches rely on full-body or partial-body postures to detect the pointing direction. We present a probabilistic, appearance-based object detection framework to detect pointing gestures and robustly estimate the pointing direction. Our method estimates the pointing direction without assuming any human kinematic model. We propose a functional model of pointing which incorporates two types of pointing: finger pointing, and tool pointing using an object held in the hand. We evaluate our method on a new dataset with 9 participants pointing at 10 objects.
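The abstract does not spell out how the pointing estimate is computed, but a minimal Python sketch of the general idea, fusing weighted pointing-ray hypotheses from an appearance-based detector and resolving the referred object, might look like the following. The function names, the weighted-averaging step, and the nearest-object rule are illustrative assumptions, not the authors' implementation.

import numpy as np

# Illustrative sketch only; the hypotheses and their weights would come from the
# probabilistic, appearance-based detector described in the paper.
def fuse_pointing_hypotheses(origins, directions, weights):
    """Fuse weighted pointing-ray hypotheses into a single origin and direction.

    origins:    (N, 3) candidate hand or tool-tip positions
    directions: (N, 3) candidate pointing directions (not necessarily unit length)
    weights:    (N,)   detector probabilities for each hypothesis
    """
    w = np.asarray(weights, dtype=float)
    w /= w.sum()
    d = np.asarray(directions, dtype=float)
    d /= np.linalg.norm(d, axis=1, keepdims=True)      # normalize each hypothesis
    origin = w @ np.asarray(origins, dtype=float)       # probability-weighted origin
    direction = w @ d
    direction /= np.linalg.norm(direction)              # renormalize the mean direction
    return origin, direction

def nearest_object_along_ray(origin, direction, object_positions):
    """Index of the object with the smallest perpendicular distance to the ray."""
    p = np.asarray(object_positions, dtype=float) - origin
    t = np.clip(p @ direction, 0.0, None)                # projection onto the ray
    closest = origin + np.outer(t, direction)            # closest ray point per object
    dist = np.linalg.norm(np.asarray(object_positions) - closest, axis=1)
    return int(np.argmin(dist))

A deictic instruction would then be resolved by calling nearest_object_along_ray with the fused ray and the known object positions on the table.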


European Conference on Computer Vision | 2016

Integration of Probabilistic Pose Estimates from Multiple Views

Özgür Erkent; Dadhichi Shukla; Justus H. Piater

We propose an approach to multi-view object detection and pose estimation that considers combinations of single-view estimates. It can be used with most existing single-view pose estimation systems, and can produce improved results even if the individual pose estimates are incoherent. The method is introduced in the context of an existing, probabilistic, view-based detection and pose estimation method (PAPE), which we here extend to incorporate diverse attributes of the scene. We tested the multi-view approach with RGB-D cameras in different environments containing several cluttered test scenes and various textured and textureless objects. The results show that the accuracies of object detection and pose estimation increase significantly over single-view PAPE and over other multiple-view integration methods.
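A minimal sketch of the fusion idea, assuming each view's estimate has already been transformed into a common frame and discretized over the same pose bins, is shown below. The product rule with a conditional-independence assumption stands in for the actual PAPE integration developed in the paper.

import numpy as np

def fuse_pose_distributions(per_view_probs, eps=1e-9):
    """per_view_probs: (V, K) array, one discrete pose distribution per view."""
    p = np.asarray(per_view_probs, dtype=float) + eps   # avoid zeroing out a pose bin
    fused = np.prod(p, axis=0)                          # independence assumption
    return fused / fused.sum()

# Two views whose individual estimates disagree still yield a usable fused estimate.
view_a = np.array([0.10, 0.60, 0.20, 0.10])
view_b = np.array([0.05, 0.35, 0.50, 0.10])
fused = fuse_pose_distributions([view_a, view_b])
print(fused.argmax())   # index of the most probable pose bin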


International Conference on Social Robotics | 2015

The Effects of Social Gaze in Human-Robot Collaborative Assembly

Kerstin Fischer; Lars Christian Jensen; Franziska Kirstein; Sebastian Stabinger; Özgür Erkent; Dadhichi Shukla; Justus H. Piater

In this paper we explore how social gaze in an assembly robot affects how naive users interact with it. In a controlled experimental study, 30 participants instructed an industrial robot to fetch parts needed to assemble a wooden toolbox. Participants either interacted with a robot employing a simple gaze following the movements of its own arm, or with a robot that follows its own movements during tasks but also gazes at the participant between instructions. Our qualitative and quantitative analyses show that people in the social gaze condition are significantly quicker to engage the robot, smile significantly more often, and can better account for where the robot is looking. In addition, we find that people in the social gaze condition feel more responsible for the task performance. We conclude that social gaze in assembly scenarios fulfills floor management functions and provides an indicator for the robot's affordance, yet that it does not influence the robot's likability, mutual interest, or perceived competence.


Robot and Human Interactive Communication | 2016

A multi-view hand gesture RGB-D dataset for human-robot interaction scenarios

Dadhichi Shukla; Özgür Erkent; Justus H. Piater

Understanding semantic meaning from hand gestures is a challenging but essential task in human-robot interaction scenarios. In this paper we present a baseline evaluation of the Innsbruck Multi-View Hand Gesture (IMHG) dataset [1] recorded with two RGB-D cameras (Kinect). As a baseline, we adopt a probabilistic appearance-based framework [2] to detect a hand gesture and estimate its pose using two cameras. The dataset consists of two types of deictic gestures with the ground truth location of the target, two symbolic gestures, two manipulative gestures, and two interactional gestures. We discuss the effect of parallax due to the offset between head and hand while performing deictic gestures. Furthermore, we evaluate the proposed framework to estimate the potential referents on the Innsbruck Pointing at Objects (IPO) dataset [2].
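To make the parallax point concrete, the toy geometry below intersects two rays with a table plane: the eye-to-fingertip line a user typically aims along, and a separate hand/forearm direction a detector might estimate. The ray_plane_intersection helper and all coordinates are made-up illustrations, not values from the IMHG dataset.

import numpy as np

def ray_plane_intersection(origin, direction, plane_z=0.0):
    """Intersect a ray with the horizontal plane z = plane_z."""
    direction = np.asarray(direction, dtype=float)
    t = (plane_z - origin[2]) / direction[2]
    return origin + t * direction

eye = np.array([0.0, 0.0, 1.6])          # approximate eye position (m)
fingertip = np.array([0.3, 0.4, 1.1])    # fingertip position (m)

gaze_ray = fingertip - eye                # eye-to-fingertip direction
arm_ray = np.array([0.2, 0.8, -0.5])      # hypothetical forearm direction

target_gaze = ray_plane_intersection(fingertip, gaze_ray)
target_arm = ray_plane_intersection(fingertip, arm_ray)
print(np.linalg.norm(target_gaze - target_arm))   # parallax offset on the table (m)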


International Conference on Computer Vision Systems | 2015

General Object Tip Detection and Pose Estimation for Robot Manipulation

Dadhichi Shukla; Özgür Erkent; Justus H. Piater

Robot manipulation tasks such as inserting screws and pegs into holes or automatic screwing require precise tip pose estimation. We propose a novel method to detect and estimate the tip of elongated objects. We demonstrate that our method can estimate tip pose to millimeter-level accuracy. We adopt a probabilistic, appearance-based object detection framework to detect pegs and bits for electric screwdrivers. Screws are difficult to detect with feature- or appearance-based methods due to their reflective characteristics. To overcome this, we propose a novel adaptation of RANSAC with a parallel-line model. Subsequently, we employ image moments to detect the tip and its pose. We show that the proposed method allows a robot to perform object insertion with only two pairs of orthogonal views, without visual servoing.
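As a rough illustration of the image-moment step, the sketch below recovers the principal axis of a segmented elongated object from its second-order central moments and takes the extreme pixel along that axis as the tip. The tip_from_mask helper is an assumption made for illustration; the peg/bit detection and the parallel-line RANSAC for reflective screws are not reproduced here.

import numpy as np

def tip_from_mask(mask):
    """mask: 2-D boolean array, True where the elongated object is segmented."""
    ys, xs = np.nonzero(mask)
    cx, cy = xs.mean(), ys.mean()                      # centroid from first-order moments

    # Second-order central moments give the orientation of the principal axis.
    mu20 = ((xs - cx) ** 2).mean()
    mu02 = ((ys - cy) ** 2).mean()
    mu11 = ((xs - cx) * (ys - cy)).mean()
    theta = 0.5 * np.arctan2(2.0 * mu11, mu20 - mu02)
    axis = np.array([np.cos(theta), np.sin(theta)])

    # The tip is the object pixel farthest from the centroid along that axis.
    proj = (xs - cx) * axis[0] + (ys - cy) * axis[1]
    i = int(np.argmax(np.abs(proj)))
    return (xs[i], ys[i]), theta                        # tip pixel and axis orientation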


Frontiers in Neurorobotics | 2018

Learning Semantics of Gestural Instructions for Human-Robot Collaboration

Dadhichi Shukla; Özgür Erkent; Justus H. Piater

Designed to work safely alongside humans, collaborative robots need to be capable partners in human-robot teams. Besides key capabilities such as detecting gestures, recognizing objects, grasping them, and handing them over, these robots need to seamlessly adapt their behavior for efficient human-robot collaboration. In this context we present the fast, supervised Proactive Incremental Learning (PIL) framework for learning associations between human hand gestures and the intended robotic manipulation actions. The proactive aspect enables the robot to predict the human's intent and perform an action without waiting for an instruction. The incremental aspect enables the robot to learn associations on the fly while performing a task. PIL is a probabilistic, statistically driven approach. As a proof of concept, we focus on a table-assembly task in which the robot assists its human partner. We investigate how the accuracy of gesture detection affects the number of interactions required to complete the task. We also conducted a human-robot interaction study with non-roboticist users comparing a proactive robot with a reactive robot that waits for instructions.
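A compressed sketch of such a learning loop is given below: keep gesture-action co-occurrence statistics, act proactively once the estimated association is confident enough, and update the statistics after every interaction. The GestureActionLearner class, its confidence threshold, and its count updates are simplifying assumptions; the state representation and update rules of the actual PIL framework are richer than this.

import numpy as np

class GestureActionLearner:
    def __init__(self, n_gestures, n_actions, threshold=0.8):
        # Uniform prior over actions for every gesture (Laplace smoothing).
        self.counts = np.ones((n_gestures, n_actions))
        self.threshold = threshold

    def action_distribution(self, gesture):
        row = self.counts[gesture]
        return row / row.sum()

    def choose(self, gesture):
        """Return (action, proactive): act without waiting once confidence is high."""
        p = self.action_distribution(gesture)
        action = int(p.argmax())
        return action, bool(p[action] >= self.threshold)

    def update(self, gesture, action, success):
        """Incremental update from the human's feedback after the robot acts."""
        self.counts[gesture, action] += 1.0 if success else -0.5
        self.counts[gesture, action] = max(self.counts[gesture, action], 1e-3)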


IEEE International Conference on Automatic Face and Gesture Recognition | 2017

Supervised Learning of Gesture-Action Associations for Human-Robot Collaboration

Dadhichi Shukla; Özgür Erkent; Justus H. Piater

As human-robot collaboration methodologies develop, robots need fast learning methods suited to domestic scenarios. This paper presents a novel approach to learning associations between human hand gestures and the robot's manipulation actions. The role of the robot is to operate as an assistant to the user. In this context we propose a supervised learning framework to explore the gesture-action space for human-robot collaboration scenarios. The framework enables the robot to learn the gesture-action associations on the fly while performing the task with the user; an example of zero-shot learning. We discuss the effect of accurate gesture detection on performing the task. The accuracy of the gesture detection system directly determines the amount of effort required from the user and the number of actions performed by the robot.
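As a toy illustration of that last point (an assumed model, not the paper's evaluation): if each gesture is recognized correctly with probability p, and every misrecognition costs one extra instruction from the user plus one corrective action from the robot, then the expected number of instructions per task step is 1/p.

# Expected user effort per task step under the assumed geometric model.
for p in (0.95, 0.85, 0.70):
    expected_instructions = 1.0 / p
    print(f"accuracy {p:.2f}: about {expected_instructions:.2f} instructions per step")

Under this assumed model, dropping from 95% to 70% detection accuracy raises the expected user effort per step by roughly a third.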


Human-Robot Interaction | 2015

Negotiating Instruction Strategies during Robot Action Demonstration

Lars Christian Jensen; Kerstin Fischer; Dadhichi Shukla; Justus H. Piater


Human-Robot Interaction | 2017

It Gets Worse Before it Gets Better: Timing of Instructions in Close Human-Robot Collaboration

Lars Christian Jensen; Kerstin Fischer; Franziska Kirstein; Dadhichi Shukla; Özgür Erkent; Justus H. Piater


Robot and Human Interactive Communication | 2017

Proactive, incremental learning of gesture-action associations for human-robot collaboration

Dadhichi Shukla; Özgür Erkent; Justus H. Piater

Collaboration


Dive into Dadhichi Shukla's collaborations.

Top Co-Authors

Kerstin Fischer
University of Southern Denmark

Lars Christian Jensen
University of Southern Denmark

Franziska Kirstein
University of Southern Denmark

Oliver Kroemer
Technische Universität Darmstadt

Rudolf Lioutikov
Technische Universität Darmstadt