Publication


Featured research published by Emre Ugur.


Adaptive Behavior | 2007

To Afford or Not to Afford: A New Formalization of Affordances Toward Affordance-Based Robot Control

Erol Şahin; Maya Çakmak; Mehmet R. Doğar; Emre Ugur; Göktürk Üçoluk

The concept of affordances was introduced by J. J. Gibson to explain how inherent “values” and “meanings” of things in the environment can be directly perceived and how this information can be linked to the action possibilities offered to the organism by the environment. Although introduced in psychology, the concept has influenced studies in other fields ranging from human-computer interaction to autonomous robotics. In this article, we first introduce the concept of affordances as conceived by J. J. Gibson and review the use of the term in different fields, with particular emphasis on its use in autonomous robotics. Then, we summarize four of the major formalization proposals for the affordance term. We point out that affordances can be viewed from three perspectives, not one, and that much of the confusion in discussions of the concept has arisen from this. We propose a new formalism for affordances and discuss its implications for autonomous robot control. We report preliminary results obtained with robots and link them with these implications.
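For readers new to the formalism, the paper roughly treats an affordance as an acquired relation between an effect and an (entity, behavior) pair: applying the behavior to the entity generates the effect. The sketch below is only a hypothetical illustration of that relation as a data structure; the class and method names are ours, not the authors'.

```python
from dataclasses import dataclass
from typing import List

@dataclass(frozen=True)
class Affordance:
    effect: str       # e.g. "lifted", "traversed"
    entity: tuple     # perceptual representation of the object/environment
    behavior: str     # e.g. "lift", "drive-forward"

class AffordanceMemory:
    """Stores acquired (effect, (entity, behavior)) relations and answers
    simple affordance queries, as an affordance-based controller might."""

    def __init__(self) -> None:
        self.relations: List[Affordance] = []

    def record(self, effect: str, entity: tuple, behavior: str) -> None:
        self.relations.append(Affordance(effect, entity, behavior))

    def affords(self, entity: tuple, behavior: str, effect: str) -> bool:
        # Does applying `behavior` to something perceived as `entity` predict
        # `effect`? Exact matching only; a real system would generalize.
        return any(a == Affordance(effect, entity, behavior) for a in self.relations)
```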


Robotics and Autonomous Systems | 2011

Goal emulation and planning in perceptual space using learned affordances

Emre Ugur; Erhan Oztop; Erol Sahin

In this paper, we show that through self-interaction and self-observation, an anthropomorphic robot equipped with a range camera can learn object affordances and use this knowledge for planning. In the first step of learning, the robot discovers commonalities in its action-effect experiences by discovering effect categories. Once the effect categories are discovered, in the second step, affordance predictors for each behavior are obtained by learning the mapping from object features to effect categories. After learning, the robot can make plans to achieve desired goals, emulate the end states of demonstrated actions, monitor plan execution, and take corrective actions using the perceptual structures employed or discovered during learning. We argue that the proposed learning system shares crucial elements with the development of 7-10-month-old infants, who explore the environment and learn the dynamics of objects through goal-free exploration. In addition, we discuss goal emulation and planning in relation to older infants with no symbolic inference capability and to non-linguistic animals that utilize object affordances to make action plans.
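A minimal sketch of the two-step pipeline described above, with off-the-shelf clustering and classification (KMeans, an RBF-kernel SVM) standing in for the paper's actual methods; the function names and data layout are assumptions.

```python
# Step 1: discover effect categories by clustering observed effects.
# Step 2: per behavior, learn a mapping from object features to effect category.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

def learn_affordances(object_features, effects, n_categories=5):
    # object_features: dict behavior -> (N, D) array of pre-action object features
    # effects:         dict behavior -> (N, E) array of observed effect vectors
    predictors = {}
    for behavior in object_features:
        categories = KMeans(n_clusters=n_categories, n_init=10).fit_predict(effects[behavior])
        predictors[behavior] = SVC(kernel="rbf").fit(object_features[behavior], categories)
    return predictors

def predict_effect_category(predictors, behavior, features):
    # Predicted effect categories can then be chained to plan toward a goal state.
    return predictors[behavior].predict(np.asarray(features).reshape(1, -1))[0]
```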


Adaptive Behavior | 2010

Traversability: A Case Study for Learning and Perceiving Affordances in Robots

Emre Ugur; Erol Şahin

The concept of affordances, introduced in psychology by J. J. Gibson, has recently attracted interest in the development of cognitive systems in autonomous robotics. In earlier work (Sahin, Çakmak, Dogar, Ugur, & Üçoluk), we reviewed the uses of this concept in different fields and proposed a formalism for using affordances at different levels of robot control. In this article, we first review studies in ecological psychology on the learning and perception of traversability in organisms and describe how traversability was judged in those studies. We then describe the implementation of one part of the affordance formalism for the learning and perception of traversability affordances on a mobile robot equipped with range sensing ability. Through experiments inspired by ecological psychology, we show that the robot, by interacting with its environment, can learn to perceive the traversability affordances. Moreover, we claim that three of the main attributes commonly associated with affordances, namely being relative to the environment, providing perceptual economy, and providing general information, are simply consequences of learning from the interactions of the robot with the environment.


international conference on robotics and automation | 2007

The learning and use of traversability affordance using range images on a mobile robot

Emre Ugur; Mehmet Remzi Dogar; Maya Cakmak; Erol Sahin

We are interested in how the concept of affordances can affect our view of autonomous robot control, and how the results obtained from autonomous robotics can be reflected back upon the discussion and studies on the concept of affordances. In this paper, we studied how a mobile robot, equipped with a 3D laser scanner, can learn to perceive the traversability affordance and use it to wander in a room filled with spheres, cylinders and boxes. The results showed that after learning, the robot can wander around avoiding contact with non-traversable objects (i.e. boxes, upright cylinders, or lying cylinders in certain orientations), while moving over traversable objects (such as spheres, and lying cylinders in a rollable orientation with respect to the robot), rolling them out of its way. We have shown that for each action approximately 1% of the perceptual features were relevant to determining whether the action is afforded or not, and that these relevant features are positioned in certain regions of the range image. The experiments were conducted both in a physics-based simulator and on a real robot.
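The "approximately 1% of the perceptual features" finding suggests a feature-relevance step before classification. The fragment below is a generic sketch of that idea under our own assumptions (mutual-information scoring, a linear SVM); it is not the authors' method.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif
from sklearn.svm import LinearSVC

def train_traversability_classifier(X, y, keep_fraction=0.01):
    """X: (N, D) features extracted from range images before acting.
    y: (N,) 1 if the action succeeded (traversable), 0 otherwise."""
    scores = mutual_info_classif(X, y)
    n_keep = max(1, int(keep_fraction * X.shape[1]))  # keep roughly 1% of the features
    relevant = np.argsort(scores)[-n_keep:]
    clf = LinearSVC().fit(X[:, relevant], y)
    return clf, relevant
```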


international conference on robotics and automation | 2012

A kernel-based approach to direct action perception

Oliver Kroemer; Emre Ugur; Erhan Oztop; Jan Peters

The direct perception of actions allows a robot to predict the afforded actions of observed objects. In this paper, we present a non-parametric approach to representing the affordance-bearing subparts of objects. This representation forms the basis of a kernel function for computing the similarity between different subparts. Using this kernel function, together with motor primitive actions, the robot can learn the required mappings to perform direct action perception. The proposed approach was successfully implemented on a real robot, which could then quickly learn to generalize grasping and pouring actions to novel objects.
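As a rough illustration of direct action perception with a kernel over object subparts: compare two objects through their subpart descriptors and feed the resulting Gram matrix to a precomputed-kernel SVM. The averaged RBF kernel below is our assumption, not the paper's kernel.

```python
import numpy as np
from sklearn.svm import SVC

def subpart_kernel(parts_a, parts_b, gamma=1.0):
    """Average pairwise RBF similarity between two sets of subpart descriptors."""
    sims = [np.exp(-gamma * np.sum((a - b) ** 2)) for a in parts_a for b in parts_b]
    return float(np.mean(sims))

def gram_matrix(objects_a, objects_b):
    return np.array([[subpart_kernel(pa, pb) for pb in objects_b] for pa in objects_a])

def fit_direct_perception(objects, action_labels):
    # objects: list of (n_parts_i, D) descriptor arrays; labels: afforded action per object
    clf = SVC(kernel="precomputed").fit(gram_matrix(objects, objects), action_labels)
    return clf, objects

def predict_afforded_action(model, novel_object):
    clf, train_objects = model
    return clf.predict(gram_matrix([novel_object], train_objects))[0]
```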


IEEE Transactions on Autonomous Mental Development | 2015

Staged Development of Robot Skills: Behavior Formation, Affordance Learning and Imitation with Motionese

Emre Ugur; Yukie Nagai; Erol Sahin; Erhan Oztop

Inspired by infant development, we propose a three-stage developmental framework for an anthropomorphic robot manipulator. In the first stage, the robot is initialized with a basic reach-and-enclose-on-contact movement capability, and discovers a set of behavior primitives by exploring its movement parameter space. In the next stage, the robot exercises the discovered behaviors on different objects and learns the caused effects, effectively building a library of affordances and associated predictors. Finally, in the third stage, the learned structures and predictors are used to bootstrap complex imitation and action learning with the help of a cooperative tutor. The main contribution of this paper is the realization of an integrated developmental system where the structures emerging from the sensorimotor experience of an interacting real robot are used as the sole building blocks of the subsequent stages that generate increasingly more complex cognitive capabilities. The proposed framework shares a number of features with infant sensorimotor development. Furthermore, the findings obtained from the self-exploration and motionese-guided human-robot interaction experiments allow us to reason about the underlying mechanisms of simple-to-complex sensorimotor skill progression in human infants.


international conference on development and learning | 2007

Curiosity-driven learning of traversability affordance on a mobile robot

Emre Ugur; Mehmet Remzi Dogar; Maya Cakmak; Erol Sahin

The concept of affordances, as proposed by J. J. Gibson, refers to the relationship between an organism and its environment and has become popular in autonomous robot control. The learning of affordances in autonomous robots, however, typically requires a large set of training data obtained from the interactions of the robot with its environment. The learning process is therefore not only time-consuming and costly but also risky, since some of the interactions may inflict damage on the robot. In this paper, we study the learning of the traversability affordance on a mobile robot and investigate how the number of interactions required can be minimized with minimal degradation of the learning process. Specifically, we propose a two-step learning process consisting of bootstrapping and curiosity-based learning phases. In the bootstrapping phase, a small set of initial interaction data is used to find the relevant perceptual features for the affordance, and a support vector machine (SVM) classifier is trained. In the curiosity-driven learning phase, a curiosity band around the decision hyperplane of the SVM is used to decide whether a given interaction opportunity is worth exploring or not. Specifically, if the output of the SVM for a given percept lies within the curiosity band, indicating that the classifier is not very certain about the hypothesized effect of the interaction, the robot goes ahead with the interaction, and skips it otherwise. Our studies within a physics-based robot simulator show that the robot achieves better learning with the proposed curiosity-driven method for a fixed number of interactions. The results also show that, for optimum performance, there exists a minimum number of initial interactions to be used for bootstrapping. Finally, the classifier trained with the proposed learning method was also successfully tested on the real robot.
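The curiosity band is concrete enough to sketch: assuming a standard SVM, the decision reduces to thresholding the magnitude of the decision function. The band width and class layout below are our assumptions.

```python
import numpy as np
from sklearn.svm import SVC

class CuriosityDrivenLearner:
    def __init__(self, curiosity_band=1.0):
        self.band = curiosity_band
        self.clf = SVC(kernel="rbf")

    def bootstrap(self, X_init, y_init):
        # A small set of initial interactions trains the first classifier.
        self.clf.fit(X_init, y_init)

    def worth_exploring(self, features):
        # Signed distance to the decision hyperplane; small magnitude = uncertain.
        margin = self.clf.decision_function(np.asarray(features).reshape(1, -1))[0]
        return abs(margin) < self.band  # inside the curiosity band -> interact
```

When `worth_exploring` returns True, the robot would carry out the interaction, record the observed outcome, and refit the classifier on the enlarged data set; otherwise it skips the opportunity.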


international conference on robotics and automation | 2015

Bottom-up learning of object categories, action effects and logical rules: From continuous manipulative exploration to symbolic planning

Emre Ugur; Justus H. Piater

This work aims for bottom-up and autonomous development of symbolic planning operators from continuous interaction experience of a manipulator robot that explores the environment using its action repertoire. Development of the symbolic knowledge is achieved in two stages. In the first stage, the robot explores the environment by executing actions on single objects, forms effect and object categories, and gains the ability to predict the object/effect categories from the visual properties of the objects by learning the nonlinear and complex relations among them. In the next stage, with further interactions that involve stacking actions on pairs of objects, the system learns logical high-level rules that return a stacking-effect category given the categories of the involved objects and the discrete relations between them. Finally, these categories and rules are encoded in Planning Domain Definition Language (PDDL), enabling symbolic planning. We realized our method by learning the categories and rules in a physics-based simulator. The learned symbols and operators are verified by generating and executing non-trivial symbolic plans on the real robot in a tower building task.
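To make the final encoding step concrete, a learned stacking rule over object categories might be emitted as a PDDL operator roughly as follows. All action, predicate, and category names here are invented for illustration and are not the paper's actual domain.

```python
def stack_rule_to_pddl(below_category, above_category, effect_category):
    # Turn one learned rule (categories of the two objects -> stacking effect)
    # into a PDDL action that a symbolic planner can chain.
    return f"""(:action stack-{above_category}-on-{below_category}
  :parameters (?top ?bottom)
  :precondition (and (category-{above_category} ?top)
                     (category-{below_category} ?bottom)
                     (clear ?bottom) (holding ?top))
  :effect (and (effect-{effect_category} ?top ?bottom)
               (on ?top ?bottom)
               (not (holding ?top)) (not (clear ?bottom))))"""

print(stack_rule_to_pddl("cuboid", "cylinder", "stacked"))
```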


IEEE Transactions on Cognitive and Developmental Systems | 2018

Affordances in Psychology, Neuroscience, and Robotics: A Survey

Lorenzo Jamone; Emre Ugur; Angelo Cangelosi; Luciano Fadiga; Alexandre Bernardino; Justus H. Piater; José Santos-Victor

The concept of affordances appeared in psychology in the late 1960s as an alternative perspective on the visual perception of the environment. Its revolutionary intuition was that the way living beings perceive the world is deeply influenced by the actions they are able to perform. Over the last 40 years, it has influenced many applied fields, e.g., design, human-computer interaction, computer vision, and robotics. In this paper, we offer a multidisciplinary perspective on the notion of affordances. We first discuss the main definitions and formalizations of affordance theory, then report the most significant evidence in psychology and neuroscience that supports it, and finally review the most relevant applications of the concept in robotics.


Robotica | 2015

Parental scaffolding as a bootstrapping mechanism for learning grasp affordances and imitation skills

Emre Ugur; Yukie Nagai; Hande Çelikkanat; Erhan Oztop

Parental scaffolding is an important mechanism utilized by infants during their development. Infants, for example, pay closer attention to the features of objects highlighted by parents and learn how to manipulate an object while being supported by parents. Parents are known to modify infant-directed actions, i.e., to use “motionese”. Motionese is characterized by a higher range and simplicity of motion, more pauses between motion segments, higher repetitiveness of demonstration, and more frequent social signals to the infant. In this paper, we extend our previously developed affordances framework to enable the robot to benefit from parental scaffolding and motionese. First, we present our results on how parental scaffolding can be used to guide the robot and modify the robot's crude action execution to speed up the learning of complex actions such as grasping. For this purpose, we realize the interactive nature of a human caregiver-infant skill transfer scenario on the robot. During reach and grasp attempts, the movement of the robot hand is modified by the human caregiver's physical interaction to enable successful grasping. Next, we discuss how parental scaffolding can be used to speed up imitation learning. We show how our robot, by using previously learned affordance prediction mechanisms, can go beyond simple goal-level imitation and become a better imitator by exploiting the infant-directed modifications of parents.

Collaboration


Dive into Emre Ugur's collaboration.

Top Co-Authors

Erol Şahin, Middle East Technical University
Simon Hangl, University of Innsbruck
Maya Cakmak, University of Washington
Mehmet Remzi Dogar, Istituto Italiano di Tecnologia
Lorenzo Jamone, Instituto Superior Técnico