
Publication


Featured research published by Hande Çelikkanat.


Robotica | 2015

Parental scaffolding as a bootstrapping mechanism for learning grasp affordances and imitation skills

Emre Ugur; Yukie Nagai; Hande Çelikkanat; Erhan Oztop

Parental scaffolding is an important mechanism utilized by infants during their development. Infants, for example, pay stronger attention to the features of objects highlighted by parents and learn how to manipulate an object while being supported by parents. Parents are known to modify infant-directed actions, i.e., to use “motionese”. Motionese is characterized by a higher range and simplicity of motion, more pauses between motion segments, higher repetitiveness of demonstration, and more frequent social signals to the infant. In this paper, we extend our previously developed affordances framework to enable the robot to benefit from parental scaffolding and motionese. First, we present our results on how parental scaffolding can be used to guide the robot and modify the robot's crude action execution, speeding up the learning of complex actions such as grasping. For this purpose, we realize the interactive nature of a human caregiver-infant skill transfer scenario on the robot. During reach and grasp attempts, the movement of the robot hand is modified by the human caregiver's physical interaction to enable successful grasping. Next, we discuss how parental scaffolding can be used to speed up imitation learning. We describe how our robot, by using previously learned affordance prediction mechanisms, can go beyond simple goal-level imitation and become a better imitator using infant-directed modifications of parents.


Archive | 2009

Guiding a Robot Flock via Informed Robots

Hande Çelikkanat; Ali Emre Turgut; Erol Şahin

In this paper, we study how and to what extent a self-organized mobile robot flock can be guided to move in a desired direction by informing some of the individuals within the flock. Specifically, we extend a flocking behavior that was shown to maneuver a swarm of mobile robots as a cohesive group in free space, avoiding obstacles in its path. In its original form, this behavior has no preferred direction and the flock wanders aimlessly in the environment. In this study, we extend the flocking behavior by “informing” some of the individuals about the desired direction in which we wish the swarm to move. The informed robots do not signal that they are “informed” (a.k.a. unacknowledged leadership) and instead guide the rest of the swarm through their tendency to move in the desired direction. Through experimental results obtained from physical and simulated robots, we show that the self-organized flocking of a swarm of robots can be effectively guided by a minority of informed robots within the flock. In our study, we use two metrics to measure the accuracy of the flock in following the desired direction and its ability to stay cohesive in the meantime. Using these metrics, we show that the proposed behavior is scalable with respect to the flock's size, and that the accuracy of guidance increases with 1) the “stubbornness” of the informed robots in aligning with the preferred direction, and 2) the ratio of the number of informed robots to the whole flock size.
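The guidance mechanism above can be sketched as a toy heading-consensus simulation (a hypothetical minimal model, not the authors' physical flocking controller): informed agents blend their alignment with the preferred direction, and "stubbornness" sets the blend weight.

```python
import numpy as np

def guided_flock(n=50, n_informed=5, steps=200, stubbornness=0.5,
                 goal=0.0, noise=0.1, seed=0):
    """Toy heading-consensus flock: a minority of informed agents pulls
    the group heading toward `goal` (radians)."""
    rng = np.random.default_rng(seed)
    theta = rng.uniform(-np.pi, np.pi, n)   # initial headings
    informed = np.zeros(n, dtype=bool)
    informed[:n_informed] = True
    for _ in range(steps):
        # every agent aligns with the current mean heading (all-to-all), plus noise
        mean = np.arctan2(np.sin(theta).mean(), np.cos(theta).mean())
        theta = mean + noise * rng.uniform(-np.pi, np.pi, n)
        # informed agents blend their heading with the desired direction;
        # `stubbornness` is the weight of that pull
        theta[informed] = (1 - stubbornness) * theta[informed] + stubbornness * goal
    # resulting flock heading
    return np.arctan2(np.sin(theta).mean(), np.cos(theta).mean())

heading = guided_flock(goal=0.0)  # settles near the desired direction
```

With `n_informed=0` the consensus direction is arbitrary; raising `stubbornness` or the informed ratio tightens convergence toward `goal`, in the spirit of the paper's two findings.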


Joint IEEE International Conference on Development and Learning and Epigenetic Robotics | 2014

Learning and using context on a humanoid robot using Latent Dirichlet Allocation

Hande Çelikkanat; Guner Orhan; Nicolas Pugeault; Frank Guerin; Erol Sahin; Sinan Kalkan

In this work, we model context in terms of a set of concepts grounded in a robot's sensorimotor interactions with the environment. To this end, we treat context as a latent variable in Latent Dirichlet Allocation, which is widely used in computational linguistics for modeling topics in texts. The flexibility of our approach allows many-to-many relationships between objects and contexts, as well as between scenes and contexts. We use a concept web representation of the perceptions of the robot as a basis for context analysis. The detected contexts of the scene can be used for several cognitive problems. Our results demonstrate that the robot can use learned contexts to improve object recognition and planning.
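The underlying topic model can be illustrated with a minimal collapsed Gibbs sampler for LDA, treating each scene as a "document" of concept tokens (an illustrative sketch with invented concept ids, not the paper's implementation):

```python
import numpy as np

def lda_gibbs(docs, n_topics, n_words, iters=200, alpha=0.1, beta=0.1, seed=0):
    """Collapsed Gibbs sampler for LDA; returns per-document topic mixtures."""
    rng = np.random.default_rng(seed)
    ndk = np.zeros((len(docs), n_topics))   # doc-topic counts
    nkw = np.zeros((n_topics, n_words))     # topic-word counts
    nk = np.zeros(n_topics)                 # topic totals
    z = []                                  # topic assignment per token
    for d, doc in enumerate(docs):
        zd = rng.integers(n_topics, size=len(doc))
        z.append(zd)
        for w, k in zip(doc, zd):
            ndk[d, k] += 1; nkw[k, w] += 1; nk[k] += 1
    for _ in range(iters):
        for d, doc in enumerate(docs):
            for i, w in enumerate(doc):
                k = z[d][i]                 # remove the token's current topic
                ndk[d, k] -= 1; nkw[k, w] -= 1; nk[k] -= 1
                # resample from the full conditional p(z = k | rest)
                p = (ndk[d] + alpha) * (nkw[:, w] + beta) / (nk + n_words * beta)
                k = rng.choice(n_topics, p=p / p.sum())
                z[d][i] = k
                ndk[d, k] += 1; nkw[k, w] += 1; nk[k] += 1
    return (ndk + alpha) / (ndk.sum(1, keepdims=True) + n_topics * alpha)

# Four "scenes" over six concept ids; ids 0-2 and 3-5 co-occur in
# different contexts (e.g. kitchen-like vs. office-like)
scenes = [[0, 1, 2, 0, 1], [3, 4, 5, 3, 4], [0, 2, 1, 2], [4, 5, 3, 5]]
theta = lda_gibbs(scenes, n_topics=2, n_words=6)
```

Scenes sharing concepts end up with the same dominant latent "context", which is what lets a detected context bias object recognition and planning.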


IEEE-RAS International Conference on Humanoid Robots | 2011

Learning to grasp with parental scaffolding

Emre Ugur; Hande Çelikkanat; Erol Sahin; Yukie Nagai; Erhan Oztop

Parental scaffolding is an important mechanism utilized by infants during their development. Infants, for example, pay stronger attention to the features of objects highlighted by parents and learn how to manipulate an object while being supported by parents. In this paper, a robot with the basic abilities of reaching for an object, closing its fingers, and lifting its hand lacks knowledge of which parts of the object afford grasping and in which hand orientation the object should be grasped. During reach and grasp attempts, the movement of the robot hand is modified by the human caregiver's physical interaction to enable successful grasping. The object regions that the robot fingers contact first are detected and stored as potentially graspable object regions along with the trajectory of the hand. In the experiments, we showed that although the human caregiver did not directly show the graspable regions, the robot was able to find regions such as the handles of mugs after its action execution was partially guided by the human. Later, this experience was used to find graspable regions of previously unseen objects. In the end, the robot was able to grasp objects based on the position of the graspable part and the stored action execution trajectories.


IEEE Transactions on Autonomous Mental Development | 2015

A Probabilistic Concept Web on a Humanoid Robot

Hande Çelikkanat; Guner Orhan; Sinan Kalkan

It is now widely accepted that concepts and conceptualization are key elements towards achieving cognition on a humanoid robot. An important problem on this path is the grounded representation of individual concepts and the relationships between them. In this article, we propose a probabilistic method based on Markov Random Fields to model a concept web on a humanoid robot in which individual concepts and the relations between them are captured. In this web, each individual concept is represented using a prototype-based conceptualization method that we proposed in our earlier work. Relations between concepts are linked to the co-occurrences of concepts in interactions. By combining input from perception, action, and language, the concept web forms rich, structured, grounded information about objects, their affordances, words, etc. We demonstrate that, given an interaction, a word, or the perceptual information from an object, the corresponding concepts in the web are activated, much the same way as they are in humans. Moreover, we show that the robot can use these activations in its concept web for several tasks to disambiguate its understanding of the scene.
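As a loose illustration of how co-occurrence links let one concept activate another, here is a toy spreading-activation sketch over hand-made weights (illustrative numbers and concepts, not the paper's MRF inference):

```python
import numpy as np

# Hand-made co-occurrence weights between five illustrative concepts
# (rows/columns: ball, round, red, cup, graspable); values are invented.
concepts = ["ball", "round", "red", "cup", "graspable"]
W = np.array([
    [0.0, 0.9, 0.4, 0.0, 0.6],
    [0.9, 0.0, 0.1, 0.3, 0.2],
    [0.4, 0.1, 0.0, 0.2, 0.0],
    [0.0, 0.3, 0.2, 0.0, 0.8],
    [0.6, 0.2, 0.0, 0.8, 0.0],
])

def activate(evidence, steps=10, decay=0.5):
    """Spread activation from perceptual evidence through the concept web."""
    a = evidence.copy()
    for _ in range(steps):
        # each concept receives weighted activation from its neighbors,
        # clamped so that directly observed evidence is never forgotten
        a = np.maximum(evidence, decay * (W @ a) / W.sum(axis=1))
    return a

# Perceiving "round" and "red" most strongly activates "ball"
a = activate(np.array([0.0, 1.0, 1.0, 0.0, 0.0]))
```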


Ant Colony Optimization and Swarm Intelligence | 2008

Modeling Phase Transition in Self-organized Mobile Robot Flocks

Ali Emre Turgut; Cristián Huepe; Hande Çelikkanat; Fatih Gökçe; Erol Şahin

We implement a self-organized flocking behavior in a group of mobile robots and analyze its transition from an aligned state to an unaligned state. We briefly describe the robot and the simulator platform together with the observed flocking dynamics. By experimenting with robotic and numerical systems, we find that an aligned-to-unaligned phase transition can be observed in both physical and simulated robots as the noise level is increased, and that this transition depends on the characteristics of the heading sensors. We extend the Vectorial Network Model to approximate the robot dynamics and show that it displays an equivalent phase transition. By computing analytically the critical noise value and numerically the steady state solutions of this model, we show that the model matches well the results obtained using detailed physics-based simulations.
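The noise-driven order-to-disorder transition can be reproduced in a few lines with a Vicsek-style alignment model (an all-to-all toy version, not the Vectorial Network Model or the physics-based simulator used in the paper):

```python
import numpy as np

def order_parameter(n=100, steps=300, noise=0.5, seed=0):
    """Alignment order parameter of an all-to-all Vicsek-style flock:
    1 = perfectly aligned, near 0 = disordered."""
    rng = np.random.default_rng(seed)
    theta = rng.uniform(-np.pi, np.pi, n)
    for _ in range(steps):
        # each agent adopts the mean heading, perturbed by uniform noise
        mean = np.arctan2(np.sin(theta).mean(), np.cos(theta).mean())
        theta = mean + noise * rng.uniform(-np.pi, np.pi, n)
    return np.hypot(np.cos(theta).mean(), np.sin(theta).mean())

low_noise, high_noise = order_parameter(noise=0.2), order_parameter(noise=1.0)
```

Sweeping `noise` from 0 toward 1 drives the order parameter from near 1 (aligned state) toward 0 (unaligned state), the transition the paper analyzes.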


IEEE Transactions on Cognitive and Developmental Systems | 2016

Learning Context on a Humanoid Robot using Incremental Latent Dirichlet Allocation

Hande Çelikkanat; Guner Orhan; Nicolas Pugeault; Frank Guerin; Erol Sahin; Sinan Kalkan

In this paper, we formalize and model context in terms of a set of concepts grounded in the sensorimotor interactions of a robot. The concepts are modeled as a web using a Markov Random Field (MRF), inspired by the concept web hypothesis for representing concepts in humans. On this concept web, we treat context as a latent variable of Latent Dirichlet Allocation (LDA), a widely used method in computational linguistics for modeling topics in texts. We extend the standard LDA method to make it incremental so that: 1) it does not relearn everything from scratch given new interactions (i.e., it is online); and 2) it can discover and add a new context into its model when necessary. We demonstrate on the iCub platform that, partly owing to modeling context on top of the concept web, our approach is adaptive, online, and robust: it is adaptive and online since it can learn and discover a new context from new interactions. It is robust since it is not affected by irrelevant stimuli and it can discover contexts after only a few interactions. Moreover, we show how to use the context learned in such a model for two important tasks: object recognition and planning.


International Conference on Advanced Robotics | 2015

Integrating spatial concepts into a probabilistic concept web

Hande Çelikkanat; Erol Sahin; Sinan Kalkan

In this paper, we study the learning and representation of grounded spatial concepts in a probabilistic concept web that connects them with other noun, adjective, and verb concepts. Specifically, we focus on the prepositional spatial concepts, such as “on”, “below”, “left”, “right”, “in front of”, and “behind”. In our prior work (Celikkanat et al., 2015), inspired by the distributed, highly connected conceptual representation in the human brain, we proposed using a Markov Random Field for modeling a concept web on a humanoid robot. To adequately express the unidirectional (i.e., non-symmetric) nature of the spatial prepositions, in this work we propose an extension of the Markov Random Field into a simple hybrid Markov Random Field model, allowing both undirected and directed connections between concepts. We demonstrate that our humanoid robot, iCub, is able to (i) extract meaningful spatial concepts in addition to noun, adjective, and verb concepts from a scene using the proposed model, (ii) correct wrong initial predictions using the connectedness of the concept web, and (iii) respond correctly to queries involving spatial concepts, such as ball-left-of-the-cup.
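The directedness that motivates the hybrid model shows up already in a toy geometric extraction of spatial predicates (a hypothetical helper with invented thresholds; the paper learns these concepts from sensorimotor data rather than hard-coding them):

```python
import numpy as np

def spatial_relations(a, b, margin=0.02):
    """Directed relations that hold from object a to object b, computed
    from 3-D centroids in a camera-like frame (x right, y up, z away)."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    rel = []
    if a[0] < b[0] - margin: rel.append("left-of")
    if a[0] > b[0] + margin: rel.append("right-of")
    if a[1] > b[1] + margin: rel.append("above")
    if a[1] < b[1] - margin: rel.append("below")
    if a[2] < b[2] - margin: rel.append("in-front-of")
    if a[2] > b[2] + margin: rel.append("behind")
    return rel

ball, cup = (0.10, 0.05, 0.40), (0.25, 0.05, 0.40)
# spatial_relations(ball, cup) differs from spatial_relations(cup, ball):
# exactly the asymmetry an undirected MRF edge cannot represent
```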


International Conference of the IEEE Engineering in Medicine and Biology Society | 2017

Decoding emotional valence from electroencephalographic rhythmic activity

Hande Çelikkanat; Hiroki Moriya; Takeshi Ogawa; Jukka-Pekka Kauppi; Motoaki Kawanabe; Aapo Hyvärinen

We attempt to decode emotional valence from electroencephalographic rhythmic activity in a naturalistic setting. We employ a data-driven method developed in a previous study, Spectral Linear Discriminant Analysis, to discover the relationships between the classification task and independent neuronal sources, optimally utilizing multiple frequency bands. A detailed investigation of the classifier provides insight into the neuronal sources related to emotional valence, and into the individual differences of the subjects in processing emotions. Our findings show: (1) sources whose locations are similar across subjects are consistently involved in emotional responses, with the involvement of parietal sources being especially significant, and (2) even though the locations of the involved neuronal sources are consistent, subjects can display highly varying degrees of valence-related EEG activity in the sources.
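For background, per-band spectral power is the kind of frequency-resolved feature such decoders build on; the sketch below computes fixed-band powers for one channel (a hypothetical preprocessing step; Spectral LDA instead learns the frequency weighting rather than fixing bands):

```python
import numpy as np

def band_powers(epoch, fs, bands=None):
    """Spectral power of one EEG channel epoch in fixed frequency bands."""
    if bands is None:
        bands = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}
    spec = np.abs(np.fft.rfft(epoch)) ** 2          # power spectrum
    freqs = np.fft.rfftfreq(len(epoch), 1 / fs)     # bin frequencies (Hz)
    return {name: spec[(freqs >= lo) & (freqs < hi)].sum()
            for name, (lo, hi) in bands.items()}

fs = 256
t = np.arange(2 * fs) / fs                 # a 2-second epoch
epoch = np.sin(2 * np.pi * 10 * t)         # pure 10 Hz (alpha-band) oscillation
powers = band_powers(epoch, fs)
```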


Signal Processing and Communications Applications Conference | 2014

Using slowness principle for feature selection: Relevant feature analysis

Hande Çelikkanat; Sinan Kalkan

We propose a novel relevant feature selection technique which makes use of the slowness principle. The slowness principle holds that physical entities in real life are subject to slow and continuous changes. Therefore, to make sense of the world, the highly erratic and fast-changing signals arriving at our sensors must be processed in order to extract slow and more meaningful, high-level representations of the world. This principle has been successfully utilized in previous work by Wiskott and Sejnowski to implement a biologically plausible vision architecture that allows for robust object recognition. In this work, we propose that the same principle can be extended to distinguish relevant features in the classification of a high-dimensional space. We compare our initial results with the state-of-the-art ReliefF feature selection method, as well as a variant of Principal Component Analysis that has been modified for feature selection. To the best of our knowledge, this is the first application of the slowness principle for the sake of relevant feature selection or classification.
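The slowness criterion can be turned into a simple per-feature score (an illustrative sketch of the principle, not the paper's method): rank features by their mean squared temporal difference normalized by variance, so slowly varying features score low.

```python
import numpy as np

def slowness_scores(X):
    """Per-feature slowness score for a time series X of shape (T, D):
    mean squared temporal difference, normalized by variance.
    Lower score = slower signal = more relevant under the slowness principle."""
    X = np.asarray(X, float)
    diffs = np.diff(X, axis=0)
    return (diffs ** 2).mean(axis=0) / (X.var(axis=0) + 1e-12)

# Feature 0 varies slowly; feature 1 is fast white noise
rng = np.random.default_rng(0)
t = np.linspace(0.0, 2.0 * np.pi, 500)
X = np.column_stack([np.sin(t), rng.normal(size=500)])
scores = slowness_scores(X)
```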

Collaboration


Dive into Hande Çelikkanat's collaborations.

Top Co-Authors

Sinan Kalkan, Middle East Technical University
Erol Şahin, Middle East Technical University
Ali Emre Turgut, Middle East Technical University
Fatih Gökçe, Middle East Technical University
Guner Orhan, Middle East Technical University
Emre Ugur, University of Innsbruck