Publications


Featured research published by Alexander Stoytchev.


International Conference on Robotics and Automation | 2005

Behavior-Grounded Representation of Tool Affordances

Alexander Stoytchev

This paper introduces a novel approach to representing and learning tool affordances by a robot. The tool representation described here uses a behavior-based approach to ground the tool affordances in the behavioral repertoire of the robot. The representation is learned during a behavioral babbling stage in which the robot randomly chooses different exploratory behaviors, applies them to the tool, and observes their effects on environmental objects. The paper shows how the autonomously learned affordance representation can be used to solve tool-using tasks by dynamically sequencing the exploratory behaviors based on their expected outcomes. The quality of the learned representation was tested on extension-of-reach tool-using tasks.
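
A minimal sketch of the behavior-grounded idea, assuming a toy 2-D pushing domain: the behavior names, the noise model, and the greedy goal-directed sequencing below are illustrative stand-ins, not the paper's implementation.

```python
import random

# Hypothetical exploratory behaviors and their (hidden) true effects on the
# target object's (x, y) position; on a real robot the effects would come
# from perception, not from this lookup table.
TRUE_EFFECTS = {
    "push_forward": (0.0, 1.0),
    "pull_back":    (0.0, -1.0),
    "slide_left":   (-1.0, 0.0),
    "slide_right":  (1.0, 0.0),
}

def observe_effect(behavior):
    """Stand-in for sensing: the true effect plus observation noise."""
    dx, dy = TRUE_EFFECTS[behavior]
    return dx + random.gauss(0, 0.1), dy + random.gauss(0, 0.1)

def babble(trials=40):
    """Behavioral babbling: apply random exploratory behaviors, record the
    observed effects, and summarize each behavior by its mean outcome."""
    log = {b: [] for b in TRUE_EFFECTS}
    for _ in range(trials):
        b = random.choice(list(TRUE_EFFECTS))
        log[b].append(observe_effect(b))
    return {b: (sum(dx for dx, _ in o) / len(o),
                sum(dy for _, dy in o) / len(o))
            for b, o in log.items() if o}

def next_behavior(affordances, pos, goal):
    """Dynamic sequencing: greedily pick the behavior whose expected
    outcome moves the object closest to the goal."""
    def dist_after(b):
        dx, dy = affordances[b]
        return (pos[0] + dx - goal[0]) ** 2 + (pos[1] + dy - goal[1]) ** 2
    return min(affordances, key=dist_after)

affordances = babble()
print(next_behavior(affordances, pos=(0.0, 0.0), goal=(0.0, 3.0)))
```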


IEEE Transactions on Autonomous Mental Development | 2009

Some Basic Principles of Developmental Robotics

Alexander Stoytchev

This paper formulates five basic principles of developmental robotics. These principles are distilled from recurring themes in the developmental learning literature and from the author's own research. The five principles follow logically from the verification principle (postulated by Richard Sutton), which is assumed to be self-evident. This paper also gives an example of how these principles can be applied to the problem of autonomous tool use in robots.


IEEE Transactions on Robotics | 2011

Vibrotactile Recognition and Categorization of Surfaces by a Humanoid Robot

Jivko Sinapov; Vladimir Sukhoy; Ritika Sahai; Alexander Stoytchev

This paper proposes a method for interactive surface recognition and surface categorization by a humanoid robot using a vibrotactile sensory modality. The robot was equipped with an artificial fingernail that had a built-in three-axis accelerometer. The robot interacted with 20 different surfaces by performing five different exploratory scratching behaviors on them. Surface-recognition models were learned by coupling frequency-domain analysis of the vibrations detected by the accelerometer with machine learning algorithms such as the support vector machine (SVM) and k-nearest neighbors (k-NN). The results show that by applying several different scratching behaviors on a test surface, the robot can recognize surfaces better than with any single behavior alone. The robot was also able to estimate a measure of similarity between any two surfaces, which was used to construct a grounded hierarchical surface categorization.
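
The pipeline the abstract describes, frequency-domain features fed to standard classifiers, can be sketched as follows; the synthetic vibration data, the binned log-spectrum features, and the probability-averaging rule for combining several scratches are assumptions for illustration only.

```python
import numpy as np
from sklearn.svm import SVC

def fft_features(vibration, n_bins=32):
    """Frequency-domain features: log-magnitude spectrum pooled into bins."""
    spectrum = np.abs(np.fft.rfft(vibration))
    return np.log1p([b.mean() for b in np.array_split(spectrum, n_bins)])

# Synthetic stand-in data: each of 20 surfaces imprints a characteristic
# spectral shape on the accelerometer signal via convolution.
rng = np.random.default_rng(0)
X, y = [], []
for surface in range(20):
    texture = rng.uniform(size=2048)
    for _ in range(10):                      # repeated scratching trials
        signal = np.convolve(rng.normal(size=2048), texture, mode="same")
        X.append(fft_features(signal))
        y.append(surface)
X, y = np.array(X), np.array(y)

svm = SVC(probability=True).fit(X[::2], y[::2])   # train on half the trials

# Combine evidence from several scratches by averaging class probabilities;
# the paper similarly finds that multiple behaviors beat any single one.
probs = svm.predict_proba(X[1::2][:5]).mean(axis=0)
print("predicted surface:", probs.argmax())
```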


IEEE Transactions on Autonomous Mental Development | 2012

A Behavior-Grounded Approach to Forming Object Categories: Separating Containers From Noncontainers

Shane Griffith; Jivko Sinapov; Vladimir Sukhoy; Alexander Stoytchev

This paper introduces a framework that allows a robot to form a single behavior-grounded object categorization after it uses multiple exploratory behaviors to interact with objects and multiple sensory modalities to detect the outcomes that each behavior produces. Our robot observed acoustic and visual outcomes from six different exploratory behaviors performed on 20 objects (containers and noncontainers). Its task was to learn 12 different object categorizations (one for each behavior-modality combination), and then to unify these categorizations into a single one. In the end, the object categorization acquired by the robot closely matched the object labels provided by a human. In addition, the robot acquired a visual model of containers and noncontainers based on its unified categorization, which it used to correctly label 29 out of 30 novel objects.
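
One plausible way to unify several per-context categorizations is a co-association (consensus clustering) matrix, sketched below; the toy labels are made up and the paper's actual unification method may differ.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def unify(categorizations, n_objects, k=2):
    """Unify per-context object categorizations: objects that often land
    in the same category across contexts end up together in the end."""
    co = np.zeros((n_objects, n_objects))
    for labels in categorizations:
        labels = np.asarray(labels)
        co += labels[:, None] == labels[None, :]
    co /= len(categorizations)
    dist = squareform(1.0 - co, checks=False)   # disagreement as distance
    return fcluster(linkage(dist, method="average"), k, criterion="maxclust")

# Toy example: 12 contexts (behavior x modality) categorize 6 objects,
# mostly agreeing that objects 0-2 and 3-5 belong together.
contexts = [[0, 0, 0, 1, 1, 1]] * 10 + [[0, 1, 0, 1, 0, 1]] * 2
print(unify(contexts, n_objects=6))   # e.g. [1 1 1 2 2 2]
```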


International Conference on Robotics and Automation | 2009

Interactive learning of the acoustic properties of household objects

Jivko Sinapov; Mark Wiemer; Alexander Stoytchev

Human beings can perceive object properties such as size, weight, and material type based solely on the sounds that the objects make when an action is performed on them. In order to be successful, the household robots of the near future must also be capable of learning and reasoning about the acoustic properties of everyday objects. Such an ability would allow a robot to detect and classify various interactions with objects that occur outside of the robot's field of view. This paper presents a framework that allows a robot to infer the object and the type of behavioral interaction performed with it from the sounds generated by the object during the interaction. The framework is evaluated on a 7-d.o.f. Barrett WAM robot which performs grasping, shaking, dropping, pushing, and tapping behaviors on 36 different household objects. The results show that the robot can learn models that can be used to recognize objects (and behaviors performed on objects) from the sounds generated during the interaction. In addition, the robot can use the learned models to estimate the similarity between two objects in terms of their acoustic properties.
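
A rough sketch of estimating acoustic similarity between objects; the averaged log-spectrum "signature" and the cosine measure are simplifying assumptions, not the recognition models used in the paper.

```python
import numpy as np

def spectral_signature(sounds):
    """Average log-magnitude spectrum over all recorded interactions with
    one object (a crude stand-in for a learned acoustic model)."""
    return np.mean([np.log1p(np.abs(np.fft.rfft(s))) for s in sounds], axis=0)

def acoustic_similarity(a, b):
    """Cosine similarity between two objects' acoustic signatures."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(1)
# Broadband "clanks" for two metal objects vs. low-frequency "thuds"
# (brown noise) for a soft object; all three are synthetic stand-ins.
metal_a = [rng.normal(size=1024) for _ in range(5)]
metal_b = [rng.normal(size=1024) for _ in range(5)]
soft    = [np.cumsum(rng.normal(size=1024)) * 0.05 for _ in range(5)]

sig_a, sig_b, sig_s = map(spectral_signature, (metal_a, metal_b, soft))
print(acoustic_similarity(sig_a, sig_b))  # similar materials: close to 1
print(acoustic_similarity(sig_a, sig_s))  # dissimilar: noticeably lower
```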


Proceedings of the 2006 International Conference on Towards Affordance-Based Robot Control | 2006

Learning the affordances of tools using a behavior-grounded approach

Alexander Stoytchev

This paper introduces a behavior-grounded approach to representing and learning the affordances of tools by a robot. The affordance representation is learned during a behavioral babbling stage in which the robot randomly chooses different exploratory behaviors, applies them to the tool, and observes their effects on environmental objects. As a result of this exploratory procedure, the tool representation is grounded in the behavioral and perceptual repertoire of the robot. Furthermore, the representation is autonomously testable and verifiable by the robot, as it is expressed in concrete terms (i.e., behaviors) that are directly available to the robot's controller. The tool representation described here can also be used to solve tool-using tasks by dynamically sequencing, based on their expected outcomes, the exploratory behaviors that were used to explore the tool. The quality of the learned representation was tested on extension-of-reach tasks with rigid tools.
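
Continuing the affordance-table sketch from the 2005 entry above, the "autonomously testable and verifiable" property might look like the following; the tolerance test and the stand-in `observe` callback are hypothetical.

```python
def verify(affordances, behavior, observe, tol=0.5):
    """Re-test a stored affordance: execute the behavior again and compare
    the fresh observation to the stored expected outcome. Entries that no
    longer hold (e.g., the tool broke) are dropped from the representation."""
    expected = affordances[behavior]
    actual = observe(behavior)          # the robot re-runs the behavior
    error = sum((a - e) ** 2 for a, e in zip(actual, expected)) ** 0.5
    if error > tol:
        del affordances[behavior]       # the representation self-corrects
    return error

# Example: the model expects push_forward to move the object +1 in y, but
# the (hypothetical) tool has broken and pushing now has no effect.
affordances = {"push_forward": (0.0, 1.0)}
print(verify(affordances, "push_forward", lambda b: (0.0, 0.0)))  # 1.0
print(affordances)  # the unverifiable entry has been removed
```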


The International Journal of Robotics Research | 2011

Interactive object recognition using proprioceptive and auditory feedback

Jivko Sinapov; Taylor Bergquist; Connor Schenck; Ugonna Ohiri; Shane Griffith; Alexander Stoytchev

In this paper we propose a method for interactive recognition of household objects using proprioceptive and auditory feedback. In our experiments, the robot observed the changes in its proprioceptive and auditory sensory streams while performing five exploratory behaviors (lift, shake, drop, crush, and push) on 50 common household objects (e.g., bottles, cups, balls, and toys). The robot was tasked with recognizing the objects it was manipulating by feeling them and listening to the sounds that they make, without using any visual information. The results show that both proprioception and audio, coupled with exploratory behaviors, can be used successfully for object recognition. Furthermore, the robot was able to integrate feedback from the two modalities to achieve even better recognition accuracy. Finally, the results show that the robot can boost its recognition rate even further by applying multiple different exploratory behaviors on the object.
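
A compact sketch of two-modality fusion in the spirit of this paper, assuming synthetic features and uniform probability averaging (the paper's exact combination rule is not reproduced here).

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(2)
n_objects, n_trials = 10, 20
y = np.repeat(np.arange(n_objects), n_trials)

# Synthetic per-trial features for two modalities: each object has a
# distinct but noisy signature in proprioception and in audio.
proto_p = rng.normal(size=(n_objects, 8))
proto_a = rng.normal(size=(n_objects, 8))
X_prop  = proto_p[y] + rng.normal(scale=1.5, size=(len(y), 8))
X_audio = proto_a[y] + rng.normal(scale=1.5, size=(len(y), 8))

train = np.arange(len(y)) % 2 == 0
clf_p = KNeighborsClassifier().fit(X_prop[train], y[train])
clf_a = KNeighborsClassifier().fit(X_audio[train], y[train])

# Fuse the modalities by averaging class probabilities; fusion typically
# matches or beats either modality alone, as the paper reports.
p = (clf_p.predict_proba(X_prop[~train]) +
     clf_a.predict_proba(X_audio[~train])) / 2
acc_p = (clf_p.predict(X_prop[~train]) == y[~train]).mean()
acc_a = (clf_a.predict(X_audio[~train]) == y[~train]).mean()
acc_c = (p.argmax(axis=1) == y[~train]).mean()
print(f"proprio {acc_p:.2f}  audio {acc_a:.2f}  combined {acc_c:.2f}")
```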


Computational Intelligence in Robotics and Automation | 2001

Combining deliberation, reactivity, and motivation in the context of a behavior-based robot architecture

Alexander Stoytchev; Ronald C. Arkin

Describes a hybrid mobile robot architecture that addresses three main challenges for robots living in human-inhabited environments: how to operate in dynamic and unpredictable environments, how to deal with high-level human commands, and how to be engaging and fun for human users. The architecture combines three components: deliberative planning, reactive control, and motivational drives. It has proven useful for controlling mobile robots in man-made environments. Results are reported for a fax delivery mission in a normal office environment.
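
A toy sketch of how the three components might interact in a single control loop; the specific behaviors, the boredom drive, and the priority ordering are illustrative, not the architecture's actual design.

```python
class HybridController:
    """Toy hybrid architecture: a deliberative plan provides high-level
    steps, a reactive rule can override them at every tick, and a
    motivational drive builds up over time and injects its own goals."""

    def __init__(self, plan):
        self.plan = list(plan)        # deliberative layer: ordered steps
        self.boredom = 0.0            # motivational layer: a drive

    def step(self, obstacle_near):
        if obstacle_near:             # reactive layer: safety first
            return "avoid_obstacle"
        self.boredom += 0.1
        if self.boredom > 1.0:        # saturated drive preempts the plan
            self.boredom = 0.0
            return "play_behavior"    # be engaging and fun
        return self.plan.pop(0) if self.plan else "idle"

robot = HybridController(["goto_office_12", "pick_up_fax", "goto_office_7"])
for t in range(14):
    print(t, robot.step(obstacle_near=(t == 3)))
```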


Robotics and Autonomous Systems | 2014

Grounding semantic categories in behavioral interactions: Experiments with 100 objects

Jivko Sinapov; Connor Schenck; Kerrick Staley; Vladimir Sukhoy; Alexander Stoytchev

From an early stage in their development, human infants show a profound drive to explore the objects around them. Research in psychology has shown that this exploration is fundamental for learning the names of objects and object categories. To address this problem in robotics, this paper presents a behavior-grounded approach that enables a robot to recognize the semantic labels of objects using its own behavioral interaction with them. To test this method, our robot interacted with 100 different objects grouped according to 20 different object categories. The robot performed 10 different behaviors on them, while using three sensory modalities (vision, proprioception and audio) to detect any perceptual changes. The results show that the robot was able to use multiple sensorimotor contexts in order to recognize a large number of object categories. Furthermore, the category recognition model presented in this paper was able to identify sensorimotor contexts that can be used to detect specific categories. Most importantly, the robot’s model was able to reduce exploration time by half by dynamically selecting which exploratory behavior should be applied next when classifying a novel object.
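
The dynamic behavior selection that cut exploration time in half can be sketched as greedy, confidence-gated evidence fusion; the reliability scores, the naive-Bayes-style update, and the stopping threshold below are assumptions for illustration.

```python
import numpy as np

def select_and_classify(contexts, threshold=0.9):
    """Apply behaviors in order of their contexts' estimated reliability,
    fuse their category predictions, and stop as soon as the category
    posterior is confident enough."""
    order = sorted(contexts, key=lambda c: -c[0])   # most reliable first
    n_categories = len(order[0][1])
    posterior = np.full(n_categories, 1.0 / n_categories)
    used = 0
    for reliability, probs in order:
        posterior = posterior * probs    # naive Bayes-style fusion
        posterior /= posterior.sum()
        used += 1
        if posterior.max() > threshold:
            break
    return posterior.argmax(), used

# Toy run: 10 behavior contexts, 20 categories, the true category is 7;
# the first five contexts are reliable and strongly favor category 7.
rng = np.random.default_rng(3)
contexts = []
for i in range(10):
    p = rng.uniform(size=20)
    p[7] += 2.0 if i < 5 else 0.5
    contexts.append((2.0 if i < 5 else 0.5, p / p.sum()))
print(select_and_classify(contexts))   # stops after only a few behaviors
```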


International Conference on Robotics and Automation | 2011

Object category recognition by a humanoid robot using behavior-grounded relational learning

Jivko Sinapov; Alexander Stoytchev

The ability to form and recognize object categories is fundamental to human intelligence. This paper proposes a behavior-grounded relational classification model that allows a robot to recognize the categories of household objects. In the proposed approach, the robot initially explores the objects by applying five exploratory behaviors (lift, shake, drop, crush and push) on them while recording the proprioceptive and auditory sensory feedback produced by each interaction. The sensorimotor data is used to estimate multiple measures of similarity between the objects, each corresponding to a specific coupling between an exploratory behavior and a sensory modality. A graph-based recognition model is trained by extracting features from the estimated similarity relations, allowing the robot to recognize the category memberships of a novel object based on the object's similarity to the set of familiar objects. The framework was evaluated on an upper-torso humanoid robot with two large sets of household objects. The results show that the robot's model is able to recognize complex object categories (e.g., metal objects, empty bottles, etc.) significantly better than chance.
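
A minimal sketch of the relational idea, assuming made-up sensorimotor descriptors: each object is represented by its similarity to the familiar objects, and a standard classifier is trained on those relational features (the paper's graph-based model is richer than this).

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(4)
n_familiar, n_dim = 30, 16

# Hidden descriptors stand in for real sensorimotor data: the first 15
# objects form one category, the other 15 (offset by +2) form a second.
offsets = np.where(np.arange(n_familiar)[:, None] < 15, 0.0, 2.0)
familiar = rng.normal(size=(n_familiar, n_dim)) + offsets
labels = (np.arange(n_familiar) >= 15).astype(int)

def similarity_features(obj):
    """Relational features: describe an object not by raw sensor data but
    by its similarity to each familiar object (one feature per relation)."""
    d = np.linalg.norm(familiar - obj, axis=1)
    return np.exp(-d ** 2 / (2 * n_dim))

X = np.array([similarity_features(o) for o in familiar])
clf = SVC().fit(X, labels)

novel = rng.normal(size=n_dim) + 2.0   # a new object acting like category 1
print(clf.predict([similarity_features(novel)]))  # -> [1]
```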

Collaboration


Dive into Alexander Stoytchev's collaborations.

Top Co-Authors

Ronald C. Arkin (Georgia Institute of Technology)
Liping Wu (Iowa State University)
Eric Martinson (Georgia Institute of Technology)