Erhan Oztop
Özyeğin University
Publication
Featured research published by Erhan Oztop.
Biological Cybernetics | 2002
Erhan Oztop; Michael A. Arbib
Mirror neurons within a monkey's premotor area F5 fire not only when the monkey performs a certain class of actions but also when the monkey observes another monkey (or the experimenter) perform a similar action. It has thus been argued that these neurons are crucial for understanding the actions of others. We offer the hand-state hypothesis as a new explanation of the evolution of this capability: the basic functionality of the F5 mirror system is to elaborate the appropriate feedback – what we call the hand state – for opposition-space-based control of manual grasping of an object. Given this functionality, the social role of the F5 mirror system in understanding the actions of others may be seen as an exaptation gained by generalizing from one's own hand to another's hand. In other words, mirror neurons first evolved to augment the "canonical" F5 neurons (active during self-movement based on observation of an object) by providing visual feedback on "hand state," relating the shape of the hand to the shape of the object. We then introduce the MNS1 (mirror neuron system 1) model of F5 and related brain regions. The existing Fagg–Arbib–Rizzolatti–Sakata model represents circuitry for visually guided grasping of objects, linking the anterior intraparietal area (AIP) with F5 canonical neurons. The MNS1 model extends the AIP visual pathway by also modeling pathways, directed toward F5 mirror neurons, which match arm–hand trajectories to the affordances and location of a potential target object. We present the basic schemas for the MNS1 model, then aggregate them into three "grand schemas" – visual analysis of hand state, reach and grasp, and the core mirror circuit – for each of which we present a useful implementation (a non-neural visual processing system, a multijoint 3-D kinematics simulator, and a learning neural network, respectively). With this implementation we show how the mirror system may learn to recognize actions already in the repertoire of the F5 canonical neurons. We show that the connectivity pattern of mirror neuron circuitry can be established through training, and that the resultant network can exhibit a range of novel, physiologically interesting behaviors during the process of action recognition. We train the system on the basis of the final grasp but then observe the whole time course of mirror neuron activity, yielding predictions for neurophysiological experiments under conditions of spatial perturbation, altered kinematics, and ambiguous grasp execution, which highlight the importance of the timing of mirror neuron activity.
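To make the learning component of the core mirror circuit concrete, the following is a minimal sketch, not the authors' implementation: a small feedforward network is trained on synthetic "hand state" trajectories (hand-object distance and grip aperture over time, both invented features here) to classify the grasp type, in the spirit of training on the final grasp while the activity over the whole trajectory can be read out.

```python
# Minimal sketch (not the MNS1 implementation): a small feedforward network
# trained on synthetic "hand state" trajectories to classify grasp type.
import numpy as np

rng = np.random.default_rng(0)

def synthetic_trajectory(grasp, steps=20):
    """Hypothetical hand state: distance to object shrinks; aperture peaks then closes."""
    t = np.linspace(0.0, 1.0, steps)
    distance = 1.0 - t + 0.02 * rng.standard_normal(steps)
    peak = 0.9 if grasp == "power" else 0.4          # power grasps open wider (assumption)
    aperture = peak * np.sin(np.pi * t) + 0.02 * rng.standard_normal(steps)
    return np.concatenate([distance, aperture])      # flattened trajectory features

# Labelled data set: 0 = precision grasp, 1 = power grasp.
X = np.array([synthetic_trajectory(g) for g in ["precision", "power"] * 100])
y = np.array([0, 1] * 100, dtype=float)

# One hidden layer trained with plain gradient descent (logistic output).
W1 = 0.1 * rng.standard_normal((X.shape[1], 8)); b1 = np.zeros(8)
W2 = 0.1 * rng.standard_normal(8); b2 = 0.0

for _ in range(500):
    h = np.tanh(X @ W1 + b1)
    p = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))
    grad_out = (p - y) / len(y)                      # cross-entropy gradient
    W2 -= 1.0 * (h.T @ grad_out); b2 -= 1.0 * grad_out.sum()
    grad_h = np.outer(grad_out, W2) * (1.0 - h ** 2)
    W1 -= 1.0 * (X.T @ grad_h); b1 -= 1.0 * grad_h.sum(axis=0)

print("training accuracy:", ((p > 0.5) == y).mean())
```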
Experimental Brain Research | 2004
Erhan Oztop; Nina S. Bradley; Michael A. Arbib
This paper presents ILGM (the Infant Learning to Grasp Model), the first computational model of infant grasp learning that is constrained by the infant motor development literature. By grasp learning we mean learning how to make motor plans in response to sensory stimuli such that open-loop execution of the plan leads to a successful grasp. The open-loop assumption is justified by the behavioral evidence that early grasping is based on open-loop control rather than on-line visual feedback. Key elements of the infancy period, namely elementary motor schemas, the exploratory nature of infant motor interaction, and inherent motor variability are captured in the model. In particular we show, through computational modeling, how an existing behavior (reaching) yields a more complex behavior (grasping) through interactive goal-directed trial and error learning. Our study focuses on how the infant learns to generate grasps that match the affordances presented by objects in the environment. ILGM was designed to learn execution parameters for controlling the hand movement as well as for modulating the reach to provide a successful grasp matching the target object affordance. Moreover, ILGM produces testable predictions regarding infant motor learning processes and poses new questions to experimentalists.
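As an illustration of the open-loop, trial-and-error idea (a toy sketch under invented assumptions, not ILGM itself), the snippet below samples grasp-plan parameters from a Gaussian distribution, executes them without feedback, and shifts the distribution toward parameter values that produced success:

```python
# Illustrative sketch only (not ILGM): open-loop grasp parameters are sampled,
# executed without feedback, and the plan distribution is nudged toward values
# that yielded success. The success test below is invented.
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical plan parameters: wrist approach angle (rad) and hand aperture (cm).
TARGET = np.array([0.6, 7.0])         # parameters that happen to fit this toy object

def grasp_succeeds(params):
    """Toy stand-in for the physical outcome of an open-loop grasp attempt."""
    return np.linalg.norm(params - TARGET) < 1.0

mean = np.array([0.0, 3.0])           # initial reach-derived plan
std = np.array([0.8, 3.0])            # exploratory motor variability

for trial in range(2000):
    plan = mean + std * rng.standard_normal(2)    # sample a motor plan
    if grasp_succeeds(plan):                      # reward only on success
        mean += 0.1 * (plan - mean)               # shift plan toward success
        std *= 0.995                              # variability shrinks with skill

print("learned plan:", mean.round(2), "target:", TARGET)
```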
International Journal of Humanoid Robotics | 2005
Erhan Oztop; David W. Franklin; Thierry Chaminade; Gordon Cheng
As humanoid robots become more commonplace in our society, it is important to understand the relation between humans and humanoid robots. In human face-to-face interaction, the observation of another individual performing an action facilitates the execution of a similar action, and interferes with the execution of a different action. This phenomenon has been explained by the existence of shared internal representations for the execution and perception of actions, which would be automatically activated by the perception of another individual's action. In one interference experiment, null interference was reported when subjects observed a robotic arm perform the incongruent task, suggesting that this effect may be specific to interacting with other humans. This experimental paradigm, designed to investigate motor interference in human interactions, was adapted to investigate how similar the implicit perception of a humanoid robot is to that of a human agent. Subjects performed rhythmic movements while observing either a human agent or a humanoid robot performing either congruent or incongruent movements. The variance of the executed movements was used as a measure of the amount of interference in the movements. Both the human and humanoid agents produced a significant interference effect. These results suggest that observing the actions of a humanoid robot and of a human agent may rely on similar perceptual processes. Furthermore, the ratio of the variance in incongruent to congruent conditions varied between the human agent and the humanoid robot. We speculate that this ratio describes how similar the implicit perception of a robot is to that of a human, so that this paradigm could provide an objective measure of the reaction to different types of robots and be used to guide the design of humanoid robots interacting with humans.
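The interference measure itself is simple arithmetic: the variance of the executed movements in each condition and the incongruent-to-congruent variance ratio. A toy computation with made-up numbers:

```python
# Sketch of the interference measure described above, with invented data:
# movement variance per condition and the incongruent / congruent ratio.
import numpy as np

rng = np.random.default_rng(2)

def movement_variance(n_trials, noise):
    """Simulated deviations of rhythmic movement amplitude (arbitrary units)."""
    return np.var(noise * rng.standard_normal(n_trials))

congruent = movement_variance(40, noise=1.0)
incongruent = movement_variance(40, noise=1.6)   # observing an incongruent action
                                                 # is assumed to add variability
ratio = incongruent / congruent
print(f"variance ratio (incongruent / congruent): {ratio:.2f}")
```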
Robotics and Autonomous Systems | 2011
Emre Ugur; Erhan Oztop; Erol Sahin
In this paper, we show that through self-interaction and self-observation, an anthropomorphic robot equipped with a range camera can learn object affordances and use this knowledge for planning. In the first step of learning, the robot discovers commonalities in its action-effect experiences by discovering effect categories. Once the effect categories are discovered, in the second step, affordance predictors for each behavior are obtained by learning the mapping from object features to effect categories. After learning, the robot can make plans to achieve desired goals, emulate the end states of demonstrated actions, monitor plan execution and take corrective actions using the perceptual structures employed or discovered during learning. We argue that the proposed learning system shares crucial elements with the development of infants of 7-10 months of age, who explore the environment and learn the dynamics of objects through goal-free exploration. In addition, we discuss goal emulation and planning in relation to older infants with no symbolic inference capability and to non-linguistic animals that utilize object affordances to make action plans.
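A compact sketch of the two learning steps, using synthetic data and invented feature names (the object features and the "push" behavior below are assumptions, not the paper's actual setup): effect categories are discovered by clustering action-effect vectors, and an affordance predictor then maps object features to those categories.

```python
# Sketch: (1) cluster action-effect vectors into effect categories,
# (2) learn a predictor from object features to the discovered categories,
# (3) query the predictor before acting, as a planner would.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)

# Invented object features (e.g. roundness, height) and the effect of a "push":
# rolling objects move far, others barely move.
n = 200
object_features = rng.uniform(0.0, 1.0, size=(n, 2))
is_rolling = (object_features[:, 0] > 0.5).astype(float)        # toy ground truth
effects = np.column_stack([
    is_rolling * 0.8 + 0.05 * rng.standard_normal(n),            # displacement
    0.05 * rng.standard_normal(n),                                # other effect dims
])

# Step 1: discover effect categories from the robot's own experience.
effect_categories = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(effects)

# Step 2: learn the affordance predictor, object features -> effect category.
predictor = LogisticRegression().fit(object_features, effect_categories)

# Step 3: a planner queries the predictor for a novel object before acting.
novel_object = np.array([[0.9, 0.3]])
print("predicted effect category for 'push':", predictor.predict(novel_object)[0])
```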
Autonomous Robots | 2014
Luka Peternel; Tadej Petrič; Erhan Oztop; Jan Babič
We propose an approach to efficiently teach robots how to perform dynamic manipulation tasks in cooperation with a human partner. The approach utilises human sensorimotor learning ability where the human tutor controls the robot through a multi-modal interface to make it perform the desired task. During the tutoring, the robot simultaneously learns the action policy of the tutor and through time gains full autonomy. We demonstrate our approach by an experiment where we taught a robot how to perform a wood sawing task with a human partner using a two-person cross-cut saw. The challenge of this experiment is that it requires precise coordination of the robot’s motion and compliance according to the partner’s actions. To transfer the sawing skill from the tutor to the robot we used Locally Weighted Regression for trajectory generalisation, and adaptive oscillators for adaptation of the robot to the partner’s motion.
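The frequency-adaptation part can be illustrated with one common adaptive phase-oscillator formulation (a sketch with made-up parameters; the paper's controller also uses Locally Weighted Regression for trajectory generalisation, which is not reproduced here):

```python
# Adaptive phase oscillator sketch: phi' = w + c, w' = c, with coupling
# c = -K * F(t) * sin(phi), so the oscillator frequency drifts toward the
# frequency of the measured partner signal F(t). Parameters are invented.
import numpy as np

dt, K = 0.001, 20.0
omega_partner = 2.0 * np.pi * 0.7        # partner saws at 0.7 Hz (made-up value)
phi, omega = 0.0, 2.0 * np.pi * 0.3      # robot starts with a wrong frequency estimate

for step in range(int(30.0 / dt)):       # 30 simulated seconds
    t = step * dt
    partner = np.sin(omega_partner * t)          # measured partner motion
    coupling = -K * partner * np.sin(phi)        # common perturbation term
    phi += dt * (omega + coupling)
    omega += dt * coupling

print(f"adapted frequency: {omega / (2 * np.pi):.2f} Hz "
      f"(partner: {omega_partner / (2 * np.pi):.2f} Hz)")
```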
ieee-ras international conference on humanoid robots | 2006
Thomas Gumpp; Pedram Azad; Kai Welke; Erhan Oztop; Rüdiger Dillmann; Gordon Cheng
Markerless hand tracking of humans can be applied to a broad range of applications in robotics, animation and natural human-computer interaction. Traditional motion capture and tracking methods involve the use of devices such as a data glove, or marker points that are fixed and calibrated on the object to perform tracking. Markerless tracking is free from such needs, and therefore allows for more freedom in movement and spontaneous interaction. In this paper, we analyze how a hand tracking system that reliably tracks arbitrary hand movements can be implemented. We explored a model-based approach that uses particle filters for tracking. In this study we also determine the degree to which the inherent parallel properties of particle filters can be exploited to achieve the goal of real-time tracking. We demonstrate the effectiveness of the tracking system via the real-time control of a 20-degrees-of-freedom dexterous robotic hand.
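A generic particle-filter loop, reduced to a single state variable for illustration (the real system tracks a full articulated hand model, and the observation model below is invented), shows the predict-weight-resample cycle; because each particle's likelihood is evaluated independently, the weighting step is what parallelizes naturally.

```python
# Generic particle filter: predict with a motion model, weight particles by the
# observation likelihood, resample. One scalar state stands in for the hand pose.
import numpy as np

rng = np.random.default_rng(4)
n_particles, steps = 500, 50

true_state = 0.0
particles = rng.normal(0.0, 1.0, n_particles)     # initial hypotheses
weights = np.full(n_particles, 1.0 / n_particles)

for t in range(steps):
    true_state += 0.1                              # hand moves; we observe it noisily
    observation = true_state + rng.normal(0.0, 0.2)

    particles += 0.1 + rng.normal(0.0, 0.05, n_particles)   # predict (motion model)
    likelihood = np.exp(-0.5 * ((observation - particles) / 0.2) ** 2)
    weights = likelihood / likelihood.sum()                  # weight by observation

    idx = rng.choice(n_particles, n_particles, p=weights)    # resample
    particles = particles[idx]
    weights = np.full(n_particles, 1.0 / n_particles)

print(f"estimate: {particles.mean():.2f}  true state: {true_state:.2f}")
```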
Signal Processing | 1999
Erhan Oztop; Adem Yasar Mülayim; Volkan Atalay; Fatos T. Yarman-Vural
This paper describes a new framework, called the repulsive attractive (RA) network, for baseline extraction on document images. The RA network is an energy-minimizing dynamical system, which interacts with the document text image through attractive and repulsive forces defined over the network components and the document image. Experimental results indicate that the network can successfully extract the baselines under heavy noise and overlaps between the ascending and descending portions of the characters of adjacent lines. The proposed framework is applicable to a wide range of image processing applications, such as curve fitting, segmentation and thinning.
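A toy rendering of the repulsive-attractive idea, with simplified stand-ins for the network's actual energy terms: candidate baselines (one row coordinate each) are attracted to image rows with high ink density and repel one another, settling near the text lines.

```python
# Sketch: two baseline nodes move under an attractive force toward inked rows
# and a repulsive force away from each other (simplified force definitions).
import numpy as np

# Synthetic "document": ink density per image row, text lines at rows 20 and 60.
rows = np.arange(100, dtype=float)
ink = np.exp(-0.5 * ((rows - 20) / 3) ** 2) + np.exp(-0.5 * ((rows - 60) / 3) ** 2)

baselines = np.array([40.0, 45.0])                 # initial baseline estimates

for _ in range(500):
    for i in range(len(baselines)):
        # Attractive force: pull toward nearby inked rows, weighted by density.
        w = ink * np.exp(-0.5 * ((rows - baselines[i]) / 10) ** 2)
        attract = np.sum(w * (rows - baselines[i])) / (np.sum(w) + 1e-9)
        # Repulsive force: push away from the other baseline nodes.
        repel = sum(np.sign(baselines[i] - b) / (abs(baselines[i] - b) + 1e-3)
                    for j, b in enumerate(baselines) if j != i)
        baselines[i] += 0.1 * attract + 0.5 * repel

print("settled baselines (rows):", np.round(baselines, 1))
```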
Brain Research Bulletin | 2008
Thierry Chaminade; Erhan Oztop; Gordon Cheng; Mitsuo Kawato
Being at the crux of human cognition and behaviour, imitation has become the target of investigations ranging from experimental psychology and neurophysiology to computational sciences and robotics. It is often assumed that imitation is innate, but it has more recently been argued, both theoretically and experimentally, that basic forms of imitation could emerge as a result of self-observation. Here, we tested this proposal on a realistic experimental platform, comprising an associative network linking a 16-degrees-of-freedom robotic hand and a simple visual system. We report that this minimal visuomotor association is sufficient to bootstrap basic imitation. Our results indicate that crucial features of human imitation, such as generalization to new actions, may emerge from a connectionist associative network. Therefore, we suggest that a behaviour as complex as imitation could be, at the neuronal level, founded on basic mechanisms of associative learning, a notion supported by a recent proposal on the developmental origin of mirror neurons. Our approach can be applied to the development of realistic cognitive architectures for humanoid robots, as well as used to shed new light on the cognitive processes at play in early human cognitive development.
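To illustrate how self-observation can bootstrap imitation (a simplified sketch: the paper's connectionist associative network is replaced here by a plain nearest-neighbor lookup, and the visual front end is invented), the robot first babbles and stores visual-motor pairs, then retrieves a motor command for an observed posture:

```python
# Self-observation sketch: store (visual appearance -> motor command) pairs during
# motor babbling; imitate an observed posture by recalling the closest appearance.
import numpy as np

rng = np.random.default_rng(6)

n_joints, n_visual = 16, 8
PROJECTION = rng.standard_normal((n_joints, n_visual))

def visual_features(joint_angles):
    """Toy forward model: what the camera would see for a given hand posture."""
    return np.tanh(joint_angles @ PROJECTION)

# Self-observation phase: random motor babbling builds the associative memory.
motor_memory = rng.uniform(-1.0, 1.0, size=(500, n_joints))
visual_memory = np.array([visual_features(m) for m in motor_memory])

# Imitation phase: an observed (demonstrated) posture retrieves a motor command.
demo_motor = rng.uniform(-1.0, 1.0, n_joints)          # tutor's hidden posture
demo_visual = visual_features(demo_motor)              # what the robot actually sees
best = np.argmin(np.linalg.norm(visual_memory - demo_visual, axis=1))
imitated = motor_memory[best]

print("mean joint error after imitation:", np.round(np.abs(imitated - demo_motor).mean(), 2))
```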
ieee-ras international conference on humanoid robots | 2006
Erhan Oztop; Li-Heng Lin; Mitsuo Kawato; Gordon Cheng
We propose a framework for skill transfer to robots that exploits the plasticity of the human brain in representing its body parts – the body schema. The conceptual idea is, in the first stage, to incorporate the target robotic platform into the experimenter's neural representation of his/her own body. In the second stage, the dexterity exhibited on a task with the new external limb (the robot) can then be used for imitation or for designing controllers for the task under consideration. In this study, following the steps outlined, we show how dexterous skill transfer can be achieved on a 16-DOF robotic hand, demonstrating the effectiveness of the proposed method and confirming the flexibility of the human brain in representing the body schema.
IEEE Transactions on Autonomous Mental Development | 2015
Emre Ugur; Yukie Nagai; Erol Sahin; Erhan Oztop
Inspired by infant development, we propose a three-staged developmental framework for an anthropomorphic robot manipulator. In the first stage, the robot is initialized with a basic reach-and-enclose-on-contact movement capability, and discovers a set of behavior primitives by exploring its movement parameter space. In the next stage, the robot exercises the discovered behaviors on different objects and learns the caused effects, effectively building a library of affordances and associated predictors. Finally, in the third stage, the learned structures and predictors are used to bootstrap complex imitation and action learning with the help of a cooperative tutor. The main contribution of this paper is the realization of an integrated developmental system where the structures emerging from the sensorimotor experience of an interacting real robot are used as the sole building blocks of the subsequent stages that generate increasingly more complex cognitive capabilities. The proposed framework shares a number of features with infant sensorimotor development. Furthermore, the findings obtained from the self-exploration and motionese-guided human-robot interaction experiments allow us to reason about the underlying mechanisms of simple-to-complex sensorimotor skill progression in human infants.
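A highly condensed sketch of the three stages on a toy problem, where every quantity (the parameter space, the effect of an action, the tutor-demonstrated goal) is invented for illustration:

```python
# Stage 1: explore a movement parameter space and keep parameters that produce
# distinct outcomes (behavior primitives). Stage 2: exercise a primitive on
# objects and learn an effect predictor. Stage 3: use the predictor to reproduce
# a demonstrated goal on a new object.
import numpy as np

rng = np.random.default_rng(7)

def execute(param, obj_size):
    """Toy world: a parameterised swipe moves the object only if param > size."""
    return 1.0 if param > obj_size else 0.0          # 1 = object moved, 0 = no effect

# Stage 1: self-exploration of the parameter space.
params = rng.uniform(0.0, 1.0, 50)
outcomes = np.array([execute(p, obj_size=0.5) for p in params])
primitives = {o: params[outcomes == o].mean() for o in np.unique(outcomes)}

# Stage 2: exercise the "move" primitive on varied objects, learn a crude predictor.
sizes = rng.uniform(0.0, 1.0, 100)
effects = np.array([execute(primitives[1.0], s) for s in sizes])
threshold = sizes[effects == 1.0].max()              # largest size that still moved

# Stage 3: reproduce a tutor-demonstrated goal (object moved) on a new object.
new_object = 0.3
if new_object <= threshold:
    print("goal is afforded; imitate with primitive parameter", round(primitives[1.0], 2))
```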