
Publications


Featured research published by Özgür Erkent.


The International Journal of Robotics Research | 2013

Bubble space and place representation in topological maps

Özgür Erkent; H. Isil Bozma

This paper presents a bubble space based representation of “places” (nodes) in topological maps. Bubble space simultaneously provides a detailed (bubble surfaces) and a holistic (bubble descriptors) representation of places. It is based on bubble memory, in which visual feature values and their local S2-metric relations from the robot’s viewpoint are simultaneously encoded on a deformable spherical surface. Bubble surfaces extend bubble memory to accommodate varying robot pose and multiple features. They are transformed into bubble descriptors that are rotationally invariant with respect to heading changes while being computable incrementally as each new set of visual observations is made. We use bubble descriptors for place learning and recognition with support vector machines in both indoor and outdoor environments and provide analysis of recognition, recall and precision rates and time performance, including a comparative study with state-of-the-art descriptors.
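To make the notion of a heading-invariant, incrementally computable descriptor concrete, here is a toy construction (not the paper's bubble descriptor): feature statistics on the viewing sphere are pooled into elevation bands, which removes any dependence on rotations about the vertical axis, and per-band running means allow incremental updates. The band count and pooling scheme are illustrative assumptions.

```python
import numpy as np

class HeadingInvariantDescriptor:
    """Toy descriptor over a viewing sphere, pooled into elevation bands.

    Pooling over azimuth makes the descriptor invariant to rotations about
    the vertical axis (heading changes); running sums allow incremental
    updates as new observations arrive. Illustrative only, not the paper's
    bubble descriptor.
    """

    def __init__(self, n_bands=16):
        self.n_bands = n_bands
        self.sums = np.zeros(n_bands)    # per-band feature sums
        self.counts = np.zeros(n_bands)  # per-band observation counts

    def update(self, elevations, values):
        """Add feature values observed at given elevation angles (radians, 0..pi).
        Azimuth never enters, so a heading change leaves the descriptor unchanged."""
        bands = np.clip((np.asarray(elevations) / np.pi * self.n_bands).astype(int),
                        0, self.n_bands - 1)
        np.add.at(self.sums, bands, np.asarray(values, float))
        np.add.at(self.counts, bands, 1)

    def vector(self):
        """Current descriptor: per-band mean feature value (0 where unseen)."""
        return np.where(self.counts > 0, self.sums / np.maximum(self.counts, 1), 0.0)


# Incremental computation: processing the observations in two chunks gives
# the same descriptor as processing them all at once.
rng = np.random.default_rng(0)
elev = rng.uniform(0, np.pi, 500)
vals = np.sin(3 * elev) + 0.1 * rng.standard_normal(500)

d1 = HeadingInvariantDescriptor()
d1.update(elev, vals)
d2 = HeadingInvariantDescriptor()
d2.update(elev[:250], vals[:250])
d2.update(elev[250:], vals[250:])
assert np.allclose(d1.vector(), d2.vector())
```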


Digital Image Computing: Techniques and Applications | 2015

Probabilistic Detection of Pointing Directions for Human-Robot Interaction

Dadhichi Shukla; Özgür Erkent; Justus H. Piater

Deictic gestures - pointing at things in human-human collaborative tasks - constitute a pervasive, non-verbal means of communication, used e.g. to direct attention towards objects of interest. In a human-robot interaction scenario, one of the key requirements for delegating tasks from a human to a robot is to recognize the pointing gesture and estimate its pose. Standard approaches rely on full-body or partial-body postures to detect the pointing direction. We present a probabilistic, appearance-based object detection framework to detect pointing gestures and robustly estimate the pointing direction. Our method estimates the pointing direction without assuming any human kinematic model. We propose a functional model for pointing which incorporates two types of pointing: finger pointing and tool pointing using an object in hand. We evaluate our method on a new dataset with 9 participants pointing at 10 objects.
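As a rough sketch of probabilistic pointing-direction estimation (not the appearance-based framework of the paper), suppose a detector returns several weighted pointing-ray hypotheses; each ray is intersected with the table plane and candidate objects are scored by probability-weighted proximity to the intersection points. The kernel width, object layout and hypothesis format are assumptions made for illustration only.

```python
import numpy as np

def expected_referent(hypotheses, objects, table_z=0.0):
    """Rank candidate objects given weighted pointing-ray hypotheses.

    hypotheses: list of (weight, origin, direction), e.g. from a hand detector;
                the directions need not agree with each other.
    objects:    dict name -> (x, y) positions on the table plane z = table_z.
    Returns objects sorted by probability-weighted proximity to the ray hits.
    """
    scores = {name: 0.0 for name in objects}
    for w, origin, direction in hypotheses:
        origin, direction = np.asarray(origin, float), np.asarray(direction, float)
        if abs(direction[2]) < 1e-9:        # ray parallel to the table, skip
            continue
        t = (table_z - origin[2]) / direction[2]
        if t <= 0:                           # points away from the table
            continue
        hit = origin + t * direction         # where this hypothesis hits the table
        for name, (x, y) in objects.items():
            d2 = (hit[0] - x) ** 2 + (hit[1] - y) ** 2
            scores[name] += w * np.exp(-d2 / (2 * 0.05 ** 2))  # 5 cm proximity kernel
    return sorted(scores.items(), key=lambda kv: -kv[1])

# Hypothetical usage: two noisy hypotheses, three objects on the table.
hyps = [(0.7, (0.0, 0.0, 0.4), (0.5, 0.1, -1.0)),
        (0.3, (0.0, 0.0, 0.4), (0.6, 0.0, -1.0))]
objs = {"cup": (0.20, 0.05), "box": (0.60, 0.00), "pen": (0.25, -0.30)}
print(expected_referent(hyps, objs))   # the cup is the most likely referent
```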


International Conference on Robotics and Automation | 2015

Long-term topological place learning

Özgür Erkent; H. Isil Bozma

In this work, we consider long-term topological place learning and present an approach that enables the robot to learn in an unsupervised, organized and incremental manner. The knowledge associated with previously visited places is stored internally in the form of a bubble descriptor semantic tree (BDST) using the previously proposed bubble space representation. The BDST is generated and maintained without any external supervision. It organizes the learned knowledge so that the terminal nodes correspond to distinct places while its structure encodes their semantic hierarchy. If the robot is unable to recognize a place with its current BDST, it learns the place by updating the BDST incrementally using the hierarchical single-linkage clustering algorithm SLINK. The proposed approach is evaluated experimentally using combined benchmark datasets from indoor and outdoor settings, with recognition rates comparable to those of state-of-the-art approaches, while the robot efficiently retains and uses the knowledge associated with the learned places.
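A much-simplified sketch of the recognize-or-learn loop: a new descriptor is matched against the stored places, and if no place is close enough it is added and a single-linkage hierarchy over the stored descriptors is recomputed with SciPy. The actual BDST is updated incrementally with SLINK rather than rebuilt, and the threshold and toy descriptors below are assumptions.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage
from scipy.spatial.distance import cdist

class PlaceTree:
    """Toy long-term place memory with a single-linkage hierarchy.

    A new descriptor is 'recognized' if it is close enough to a stored place;
    otherwise it is added and the hierarchy is rebuilt. The real BDST is
    maintained incrementally with SLINK; rebuilding here keeps the sketch short.
    """

    def __init__(self, recognition_threshold=0.5):
        self.thr = recognition_threshold
        self.places = []        # list of descriptor vectors, one per learned place
        self.hierarchy = None   # SciPy linkage matrix over the stored places

    def observe(self, descriptor):
        descriptor = np.asarray(descriptor, float)
        if self.places:
            dists = cdist([descriptor], np.vstack(self.places))[0]
            nearest = int(np.argmin(dists))
            if dists[nearest] < self.thr:
                return ("recognized", nearest)
        self.places.append(descriptor)          # learn a new place
        if len(self.places) >= 2:
            self.hierarchy = linkage(np.vstack(self.places), method="single")
        return ("learned", len(self.places) - 1)

# Hypothetical usage with 2-D descriptors standing in for bubble descriptors.
tree = PlaceTree(recognition_threshold=0.5)
for d in [(0.0, 0.0), (0.1, 0.0), (3.0, 3.0), (3.1, 2.9)]:
    print(tree.observe(d))
```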


International Conference on Robotics and Automation | 2012

Place representation in topological maps based on bubble space

Özgür Erkent; H. Isil Bozma

Place representation is a key element of topological maps. This paper presents bubble space, a novel representation for “places” (nodes) in topological maps. The novelties of this model are two-fold: First, a mathematical formalism that defines bubble space is presented. This formalism extends the previously proposed bubble memory to accommodate two new variables: varying robot pose and multiple features. Each bubble surface preserves the local S2-metric relations of the incoming sensory data from the robot’s viewpoint. Second, for learning and recognition, bubble surfaces can be transformed into bubble descriptors that are compact and rotationally invariant, while being computable in an incremental manner. The proposed model is evaluated with support vector machine based decision making in two different settings: first with a mobile robot placed in a variety of locations, and second using benchmark visual data.
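As an illustration of the support vector machine based decision making mentioned above, the following sketch trains an off-the-shelf SVM on fixed-length place descriptors and reports a recognition rate on held-out views. The descriptors are mocked with random vectors and the classifier settings are generic assumptions, not the configuration used in the paper.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

# Mock data: 4 places, 50 descriptors each (in practice these would be
# bubble descriptors computed from the robot's visual observations).
rng = np.random.default_rng(1)
centers = rng.normal(size=(4, 32))
X = np.vstack([c + 0.3 * rng.normal(size=(50, 32)) for c in centers])
y = np.repeat(np.arange(4), 50)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

clf = SVC(kernel="rbf", C=10.0, gamma="scale")   # generic parameters, not the paper's
clf.fit(X_train, y_train)
print("recognition rate on held-out views:", clf.score(X_test, y_test))
```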


European Conference on Computer Vision | 2016

Integration of Probabilistic Pose Estimates from Multiple Views

Özgür Erkent; Dadhichi Shukla; Justus H. Piater

We propose an approach to multi-view object detection and pose estimation that considers combinations of single-view estimates. It can be used with most existing single-view pose estimation systems, and can produce improved results even if the individual pose estimates are incoherent. The method is introduced in the context of an existing, probabilistic, view-based detection and pose estimation method (PAPE), which we here extend to incorporate diverse attributes of the scene. We tested the multi-view approach with RGB-D cameras in different environments containing several cluttered test scenes and various textured and textureless objects. The results show that the accuracies of object detection and pose estimation increase significantly over single-view PAPE and over other multiple-view integration methods.
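The core idea of combining single-view estimates can be illustrated with a toy fusion rule: per-view probability distributions over the same candidate poses (already expressed in a common world frame) are multiplied and renormalized, assuming the views are conditionally independent given the true pose. This is not the integration rule used with PAPE, only a minimal sketch of how a wrong single-view maximum can be outvoted.

```python
import numpy as np

def fuse_views(per_view_probs, eps=1e-6):
    """Combine per-view probability vectors over the same candidate poses
    via a normalized product (naive conditional-independence assumption)."""
    fused = np.ones_like(np.asarray(per_view_probs[0], float))
    for p in per_view_probs:
        fused *= np.asarray(p, float) + eps   # eps keeps one bad view from zeroing a pose
    return fused / fused.sum()

# Three candidate poses; each single view is ambiguous or even wrong on its own.
view1 = [0.45, 0.40, 0.15]   # slightly prefers pose 0 (incoherent with the rest)
view2 = [0.30, 0.50, 0.20]   # prefers pose 1
view3 = [0.20, 0.55, 0.25]   # prefers pose 1
print(fuse_views([view1, view2, view3]))   # fused estimate clearly favors pose 1
```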


Machine Vision Applications | 2014

RGB-D based place representation in topological maps

Hakan Karaoguz; Özgür Erkent; H. Isil Bozma

With recent developments in sensor technology, including the Microsoft Kinect, it has become much easier to augment visual data with three-dimensional depth information. In this paper, we propose a new approach to RGB-D based topological place representation, building on bubble space. While the bubble space representation is in principle transparent to the type and number of sensory inputs employed, in practice this has only been verified with visual data acquired either via a two degrees of freedom camera head or an omnidirectional camera. The primary contribution of this paper is therefore of a practical nature. We show that the bubble space representation can easily be used to combine RGB and depth data while affording acceptable recognition performance even with limited field of view sensing and simple features.
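As a stand-in for how an extra sensory channel can be folded into a fixed-length place descriptor, the sketch below simply concatenates an intensity histogram with a depth histogram. The bubble space representation instead encodes the modalities on spherical surfaces; the frame sizes, depth range and feature choice here are illustrative assumptions.

```python
import numpy as np

def rgbd_descriptor(rgb, depth, bins=16):
    """Toy fixed-length descriptor combining appearance and depth.

    Concatenates an intensity histogram with a depth histogram; only a
    stand-in showing that the representation can absorb an extra channel.
    """
    gray = rgb.mean(axis=2)                                    # simple intensity channel
    h_rgb, _ = np.histogram(gray, bins=bins, range=(0, 255), density=True)
    valid = depth > 0                                          # ignore missing depth (sensor holes)
    h_d, _ = np.histogram(depth[valid], bins=bins, range=(0.4, 5.0), density=True)
    return np.concatenate([h_rgb, h_d])

# Hypothetical frame sizes; a real pipeline would read Kinect RGB-D frames.
rgb = np.random.randint(0, 256, (480, 640, 3)).astype(float)
depth = np.random.uniform(0.4, 5.0, (480, 640))
print(rgbd_descriptor(rgb, depth).shape)   # (32,)
```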


Autonomous Robots | 2012

Artificial potential functions based camera movements and visual behaviors in attentive robots

Özgür Erkent; H. Isil Bozma

An attentive robot needs to exhibit a plethora of different visual behaviors, including free viewing, detecting visual onsets, search, remaining fixated and tracking, depending on the vision task at hand. The robot’s associated camera movements, ranging from saccades to smooth pursuit, direct its optical axis in a manner that depends on the current visual behavior. This paper proposes a closed-loop dynamical systems approach to the generation of camera movements based on a family of artificial potential functions. Each movement from the current fixation point to the next is associated with an artificial potential function that encodes saliency and possibly inhibition, depending on the visual behavior the robot is engaged in. The novelty of this approach is that, since the nature of the resulting motion can vary from saccadic to smooth pursuit, the full repertoire of visual behaviors becomes possible within the same framework. The robot can switch its visual behavior simply by changing the parameters of the constructed artificial potential functions appropriately. Furthermore, automated reflexive changes among the different visual behaviors can be achieved via a simple switching automaton. Experimental results with the APES robot demonstrate the performance of the robot engaged in each visual behavior.
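A minimal sketch of the idea of potential-based gaze control: the gaze point descends the gradient of a potential composed of Gaussian wells at salient locations and Gaussian bumps at inhibited ones, and a single gain parameter shifts the motion between fast, saccade-like jumps and slow, pursuit-like drift. The potential shape, parameters and 2-D gaze state are illustrative assumptions, not the APES controller.

```python
import numpy as np

def grad_potential(gaze, attractors, repellers, sigma=0.5):
    """Gradient of a toy potential: Gaussian wells at salient points
    (attractors) and Gaussian bumps at inhibited points (repellers)."""
    g = np.zeros(2)
    for p, w in attractors:
        d = gaze - np.asarray(p, float)
        g += w * d / sigma**2 * np.exp(-(d @ d) / (2 * sigma**2))   # pulls gaze toward p
    for p, w in repellers:
        d = gaze - np.asarray(p, float)
        g -= w * d / sigma**2 * np.exp(-(d @ d) / (2 * sigma**2))   # pushes gaze away from p
    return g

def run_gaze(gain, steps=15):
    """Descend the potential; a large gain gives fast, saccade-like jumps,
    a small gain gives slow, pursuit-like drift toward the salient target."""
    gaze = np.array([0.0, 0.0])
    attractors = [((0.8, 0.5), 1.0)]   # one salient location
    repellers = [((0.1, 0.1), 0.5)]    # one inhibited (already attended) location
    for _ in range(steps):
        gaze = gaze - gain * grad_potential(gaze, attractors, repellers)
    return gaze

print("saccade-like gaze after 15 steps:", run_gaze(gain=0.3))
print("pursuit-like gaze after 15 steps:", run_gaze(gain=0.02))
```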


International Conference on Social Robotics | 2015

The Effects of Social Gaze in Human-Robot Collaborative Assembly

Kerstin Fischer; Lars Christian Jensen; Franziska Kirstein; Sebastian Stabinger; Özgür Erkent; Dadhichi Shukla; Justus H. Piater

In this paper we explore how social gaze in an assembly robot affects how naive users interact with it. In a controlled experimental study, 30 participants instructed an industrial robot to fetch parts needed to assemble a wooden toolbox. Participants interacted either with a robot whose gaze simply follows the movements of its own arm, or with a robot that follows its own movements during tasks but also gazes at the participant between instructions. Our qualitative and quantitative analyses show that people in the social gaze condition engage the robot significantly more quickly, smile significantly more often, and can better account for where the robot is looking. In addition, we find that people in the social gaze condition feel more responsible for the task performance. We conclude that social gaze in assembly scenarios fulfills floor-management functions and provides an indicator of the robot’s affordances, yet does not influence the robot’s likability, perceived mutual interest, or suspected competence.


Robot and Human Interactive Communication | 2016

A multi-view hand gesture RGB-D dataset for human-robot interaction scenarios

Dadhichi Shukla; Özgür Erkent; Justus H. Piater

Understanding semantic meaning from hand gestures is a challenging but essential task in human-robot interaction scenarios. In this paper we present a baseline evaluation of the Innsbruck Multi-View Hand Gesture (IMHG) dataset [1] recorded with two RGB-D cameras (Kinect). As a baseline, we adopt a probabilistic appearance-based framework [2] to detect a hand gesture and estimate its pose using two cameras. The dataset consists of two types of deictic gestures with the ground truth location of the target, two symbolic gestures, two manipulative gestures, and two interactional gestures. We discuss the effect of parallax due to the offset between head and hand while performing deictic gestures. Furthermore, we evaluate the proposed framework to estimate the potential referents on the Innsbruck Pointing at Objects (IPO) dataset [2].
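The parallax effect mentioned above can be made concrete with a small geometric example: the ray along the arm and the ray from the eye through the fingertip intersect the table at different points, and the gap grows with the head-hand offset and the pointing distance. All coordinates below are hypothetical.

```python
import numpy as np

def table_hit(origin, direction, table_z=0.0):
    """Intersection of a pointing ray with the table plane z = table_z."""
    origin, direction = np.asarray(origin, float), np.asarray(direction, float)
    t = (table_z - origin[2]) / direction[2]
    return origin + t * direction

# Hypothetical geometry (metres): the eye sits above and behind the fingertip.
eye = np.array([0.0, 0.0, 1.5])
fingertip = np.array([0.3, 0.0, 1.1])
arm_direction = np.array([1.0, 0.0, -0.8])        # direction the forearm/finger points

hit_arm = table_hit(fingertip, arm_direction)      # where the hand "points"
hit_gaze = table_hit(eye, fingertip - eye)         # where the eye-fingertip line points
print("arm-ray hit:  ", hit_arm[:2])
print("gaze-ray hit: ", hit_gaze[:2])
print("parallax gap: ", np.linalg.norm(hit_arm[:2] - hit_gaze[:2]), "m")
```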


International Conference on Computer Vision Systems | 2015

General Object Tip Detection and Pose Estimation for Robot Manipulation

Dadhichi Shukla; Özgür Erkent; Justus H. Piater

Robot manipulation tasks such as inserting screws and pegs into holes, or automatic screwing, require precise tip pose estimation. We propose a novel method to detect the tip of elongated objects and estimate its pose. We demonstrate that our method can estimate tip pose to millimeter-level accuracy. We adopt a probabilistic, appearance-based object detection framework to detect pegs and bits for electric screwdrivers. Screws are difficult to detect with feature- or appearance-based methods due to their reflective characteristics. To overcome this, we propose a novel adaptation of RANSAC with a parallel-line model. Subsequently, we employ image moments to detect the tip and its pose. We show that the proposed method allows a robot to perform object insertion with only two pairs of orthogonal views, without visual servoing.
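To give a flavor of the moment-based tip step, the sketch below estimates the orientation and tip of an elongated binary blob from its image moments: centroid from first-order moments, principal-axis angle from second-order central moments, and the tip as the blob pixel farthest along that axis. It omits the RANSAC parallel-line stage and any screw-specific handling, and all values are synthetic.

```python
import numpy as np

def tip_from_moments(mask):
    """Estimate the tip and in-plane orientation of an elongated binary blob.

    Uses image moments: centroid from first-order moments, orientation of the
    principal axis from second-order central moments, tip = blob pixel that
    lies farthest along that axis. A simplified stand-in for the tip/pose
    step described in the abstract.
    """
    ys, xs = np.nonzero(mask)
    cx, cy = xs.mean(), ys.mean()                          # centroid
    mu20 = ((xs - cx) ** 2).mean()
    mu02 = ((ys - cy) ** 2).mean()
    mu11 = ((xs - cx) * (ys - cy)).mean()
    theta = 0.5 * np.arctan2(2 * mu11, mu20 - mu02)        # principal-axis angle
    axis = np.array([np.cos(theta), np.sin(theta)])
    proj = (xs - cx) * axis[0] + (ys - cy) * axis[1]       # signed distance along the axis
    i = int(np.argmax(np.abs(proj)))                       # farthest pixel = candidate tip
    return (xs[i], ys[i]), theta

# Synthetic elongated blob: a thin diagonal bar in a 100x100 image.
mask = np.zeros((100, 100), dtype=bool)
for t in range(60):
    mask[20 + t // 2, 10 + t] = True
print(tip_from_moments(mask))   # one end of the bar, angle of about 0.46 rad
```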

Collaboration


Dive into Özgür Erkent's collaborations.

Top Co-Authors

Franziska Kirstein, University of Southern Denmark

Kerstin Fischer, University of Southern Denmark

Lars Christian Jensen, University of Southern Denmark