Publications


Featured research published by Vadim Tikhanoff.


International Conference on Artificial Neural Networks | 2006

Language and cognition integration through modeling field theory: category formation for symbol grounding

Vadim Tikhanoff; José F. Fontanari; Angelo Cangelosi; Leonid I. Perlovsky

Neural Modeling Field Theory is based on the principle of associating lower-level signals (e.g., inputs, bottom-up signals) with higher-level concept-models (e.g., internal representations, categories/concepts, top-down signals) while avoiding the combinatorial complexity inherent to such a task. In this paper we present an extension of the Modeling Field Theory neural network for the classification of objects. Simulations show that (i) the system is able to dynamically adapt when an additional feature is introduced during learning, (ii) the algorithm can be applied to the classification of action patterns in the context of cognitive robotics, and (iii) it is able to classify multi-feature objects from a complex stimulus set. The use of Modeling Field Theory for studying the integration of language and cognition in robots is discussed.
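
To make the mechanism concrete, the following is a minimal sketch of the dynamic-logic loop at the heart of Modeling Field Theory: fuzzy data-to-model associations are computed from a similarity measure, model parameters move toward the data they account for, and the fuzziness is gradually reduced. The Gaussian concept-models, the annealing schedule, and the two-category toy stimulus set are illustrative assumptions, not the paper's actual configuration.

    # Minimal dynamic-logic sketch, assuming Gaussian concept-models.
    import numpy as np

    rng = np.random.default_rng(0)
    # Toy stimulus set: two object categories in a 2-D feature space.
    data = np.vstack([rng.normal([0, 0], 0.3, (50, 2)),
                      rng.normal([3, 3], 0.3, (50, 2))])

    n_models = 2
    means = rng.normal(1.5, 1.0, (n_models, 2))   # concept-model parameters
    sigma = 4.0                                    # initial fuzziness (high)

    for step in range(60):
        # Fuzzy association f(m|n): similarity of each datum to each model.
        d2 = ((data[:, None, :] - means[None, :, :]) ** 2).sum(-1)
        sim = np.exp(-d2 / (2 * sigma ** 2))
        f = sim / sim.sum(axis=1, keepdims=True)
        # Move model parameters toward the data they account for.
        means = (f.T @ data) / f.sum(axis=0)[:, None]
        # Dynamic logic: gradually sharpen the models (anneal fuzziness).
        sigma = max(0.3, sigma * 0.9)

    print("learned category centres:", means.round(2))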


IEEE Transactions on Autonomous Mental Development | 2011

Integration of Speech and Action in Humanoid Robots: iCub Simulation Experiments

Vadim Tikhanoff; Angelo Cangelosi; Giorgio Metta

Building intelligent systems with human-level competence is the ultimate grand challenge for science and technology in general, and for cognitive developmental robotics in particular. This paper proposes a new approach to the design of cognitive skills in a robot able to interact with, and communicate about, the surrounding physical world and to manipulate objects in an adaptive manner. The work is based on robotic simulation experiments showing that a humanoid robot (the iCub platform) is able to acquire behavioral, cognitive, and linguistic skills through individual and social learning. The robot learns to handle and manipulate objects autonomously, to understand basic instructions, and to adapt its abilities to changes in internal and environmental conditions.


Neural Networks | 2009

2009 Special Issue: Cross-situational learning of object-word mapping using Neural Modeling Fields

José F. Fontanari; Vadim Tikhanoff; Angelo Cangelosi; Roman Ilin; Leonid I. Perlovsky

The issue of how children learn the meaning of words is fundamental to developmental psychology. Recent attempts to develop or evolve efficient communication protocols among interacting robots or virtual agents have also brought that issue to a central place in more applied research fields, such as computational linguistics and neural networks. An attractive approach to learning an object-word mapping is so-called cross-situational learning, a scenario based on the intuitive notion that a learner can determine the meaning of a word by finding something in common across all observed uses of that word. Here we show how the deterministic Neural Modeling Fields (NMF) categorization mechanism can be used by the learner as an efficient algorithm to infer the correct object-word mapping. To achieve that, we first reduce the original online learning problem to a batch learning problem in which the inputs to the NMF mechanism are all possible object-word associations that could be inferred from the cross-situational learning scenario. Since many of those associations are incorrect, they are treated as clutter or noise and discarded automatically by a clutter detector model included in our NMF implementation. With these two key ingredients, batch learning and clutter detection, the NMF mechanism was able to infer the correct object-word mapping perfectly.
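
A minimal sketch of the batch cross-situational setting is shown below; plain co-occurrence counting stands in for the NMF categorization and clutter-detection machinery, and the four-word toy vocabulary is an illustrative assumption.

    # Batch enumeration of candidate object-word associations across
    # ambiguous situations; correct pairs dominate the counts.
    import itertools, random
    from collections import Counter

    random.seed(1)
    lexicon = {"ball": "BALL", "cup": "CUP", "box": "BOX", "toy": "TOY"}
    words, objects = list(lexicon), list(lexicon.values())

    # Each situation shows two objects and utters their two words
    # (unordered), so any single situation is ambiguous about the mapping.
    counts = Counter()
    for _ in range(200):
        shown = random.sample(words, 2)
        scene_objects = [lexicon[w] for w in shown]
        # Every word-object pair in the situation is a candidate
        # association; the incorrect pairs are the "clutter" to reject.
        for w, o in itertools.product(shown, scene_objects):
            counts[(w, o)] += 1

    # A correct pair co-occurs in every situation containing the word, so
    # it dominates the counts and the clutter is discarded by an argmax.
    mapping = {w: max(objects, key=lambda o: counts[(w, o)]) for w in words}
    print(mapping)   # {'ball': 'BALL', 'cup': 'CUP', ...}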


International Conference on Robotics and Automation | 2012

Active object recognition on a humanoid robot

Björn Browatzki; Vadim Tikhanoff; Giorgio Metta; Hh Bülthoff; Christian Wallraven

Interaction with its environment is a key requisite for a humanoid robot. In particular, the ability to recognize and manipulate unknown objects is crucial for working successfully in natural environments. Visual object recognition, however, remains a challenging problem, as three-dimensional objects often give rise to ambiguous two-dimensional views. Here, we propose a perception-driven, multisensory exploration and recognition scheme to actively resolve ambiguities that emerge at certain viewpoints. We define an efficient method to acquire two-dimensional views in an object-centered task space and sample characteristic views on a view sphere. Information is accumulated during the recognition process and used to select the actions expected to be most beneficial in discriminating similar objects. Besides visual information, we take proprioceptive information into account to create more reliable hypotheses. Simulation and real-world results clearly demonstrate the efficiency of active, multisensory exploration over passive, vision-only recognition methods.
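
The evidence-accumulation and action-selection loop can be sketched as a myopic expected-entropy minimization over candidate viewpoints. The sketch below is vision-only and uses a made-up likelihood table for three objects and four views; the paper's method additionally exploits proprioception and a richer view-sphere sampling.

    # Next-best-view selection by minimizing expected posterior entropy.
    import numpy as np

    rng = np.random.default_rng(2)
    # p_obs[v, o, f]: probability of observing visual feature f of object o
    # from viewpoint v on the view sphere (rows sum to 1 over f).
    p_obs = rng.dirichlet(np.ones(5), size=(4, 3))

    def entropy(p):
        p = p[p > 0]
        return -(p * np.log(p)).sum()

    belief = np.ones(3) / 3           # uniform prior over object hypotheses
    for step in range(3):
        # Expected posterior entropy after moving to each candidate view v.
        expected_h = []
        for v in range(4):
            h = 0.0
            for f in range(5):
                p_f = belief @ p_obs[v, :, f]         # predictive prob. of f
                post = belief * p_obs[v, :, f] / p_f  # Bayes update if f seen
                h += p_f * entropy(post)
            expected_h.append(h)
        v_next = int(np.argmin(expected_h))           # most informative view
        f_seen = rng.choice(5, p=p_obs[v_next, 0])    # true object is object 0
        belief = belief * p_obs[v_next, :, f_seen]
        belief /= belief.sum()
        print(f"view {v_next}: belief = {belief.round(3)}")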


IEEE-RAS International Conference on Humanoid Robots | 2013

Exploring affordances and tool use on the iCub

Vadim Tikhanoff; Ugo Pattacini; Lorenzo Natale; Giorgio Metta

One of the recurring challenges in humanoid robotics is the development of learning mechanisms to predict the effects of certain actions on objects. It is paramount to predict the functional properties of an object from “afar”, for example on a table, in a rack, or on a shelf, which would allow the robot to select beforehand and automatically an appropriate action (or sequence of actions) to achieve a particular goal. Such sensory-to-motor schemas associated with objects, surfaces, or other entities in the environment are called affordances [1, 2] and, more recently, have been formalized computationally under the name of object-action complexes (OACs) [3]. This paper describes an approach to the acquisition of affordances and tool use in a humanoid robot that combines vision, learning, and control. Learning is structured to enable a natural progression of episodes that include objects, tools, and eventually knowledge of the complete task. Finally, we test the robot's behavior in an object retrieval task where it has to choose among a number of possible elongated tools to reach an object of interest that is otherwise out of its workspace.
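
As a rough illustration of the idea, the sketch below learns a toy forward model mapping (tool feature, action) to object displacement from exploration data, then selects the tool predicted to pull an out-of-reach object back into the workspace. The hook-length feature, the least-squares model, and the tool rack are hypothetical stand-ins for the paper's learned affordance models.

    # Affordance-style tool selection from a learned forward model.
    import numpy as np

    rng = np.random.default_rng(3)
    # Exploration episodes: tool hook-length feature, pull-action magnitude.
    X = rng.uniform([0.0, 0.0], [0.3, 1.0], size=(200, 2))
    # Toy ground truth: displacement grows with hook length * pull magnitude.
    y = 0.8 * X[:, 0] * X[:, 1] + rng.normal(0, 0.01, 200)

    # Fit a simple forward model by least squares on a product-term basis.
    phi = np.column_stack([X[:, 0] * X[:, 1], X[:, 0], X[:, 1], np.ones(200)])
    w, *_ = np.linalg.lstsq(phi, y, rcond=None)

    def predicted_pull(hook_len, action=1.0):
        return np.array([hook_len * action, hook_len, action, 1.0]) @ w

    tools = {"rake": 0.25, "stick": 0.05, "hoe": 0.15}  # hypothetical rack
    gap = 0.18      # target lies 18 cm beyond the reachable workspace
    pred = {t: float(predicted_pull(L)) for t, L in tools.items()}
    choice = max((t for t in pred if pred[t] >= gap), key=pred.get, default=None)
    print(pred, "->", choice)   # the rake affords retrieving the object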


International Joint Conference on Neural Networks | 2006

Language Acquisition and Symbol Grounding Transfer with Neural Networks and Cognitive Robots

Angelo Cangelosi; Emmanouil Hourdakis; Vadim Tikhanoff

Neural networks have been proposed as an ideal cognitive-modeling methodology for dealing with the symbol grounding problem. More recently, such neural network approaches have been incorporated in studies based on cognitive agents and robots. In this paper we present a new model of symbol grounding transfer in cognitive robots. Language learning simulations demonstrate that robots are able to acquire new action concepts via linguistic instructions. This is achieved by autonomously transferring the grounding from directly grounded action names to new higher-order composite actions. The robot's neural network controller permits such a grounding transfer. The implications of such a modeling approach for cognitive science and autonomous robotics are discussed.
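
The transfer step can be illustrated with a minimal sketch: basic action names are grounded directly in motor patterns, and a new higher-order word inherits its grounding from a linguistic instruction that composes already-grounded words, with no new sensorimotor experience. The one-hot linear mapping and the small action set are illustrative assumptions, not the model's actual architecture.

    # Grounding transfer: a composite word inherits a motor pattern
    # composed from directly grounded words.
    import numpy as np

    words = ["reach", "close", "lift", "grab"]
    motor_dim = 4
    W = np.zeros((motor_dim, len(words)))     # linear word -> motor mapping

    def one_hot(word):
        v = np.zeros(len(words)); v[words.index(word)] = 1.0
        return v

    # Stage 1: direct grounding of basic action names (supervised pairs).
    basic_grounding = {"reach": [1, 0, 0, 0], "close": [0, 1, 0, 0],
                       "lift":  [0, 0, 1, 0]}
    for w_name, motor in basic_grounding.items():
        W[:, words.index(w_name)] = motor     # imprint the association

    # Stage 2: grounding transfer via the linguistic instruction
    # "grab = reach + close": the new word's grounding is composed from
    # the groundings of the words in the instruction.
    instruction = ("grab", ["reach", "close"])
    composite = sum(W @ one_hot(part) for part in instruction[1])
    W[:, words.index(instruction[0])] = composite

    print("motor pattern for 'grab':", W @ one_hot("grab"))  # [1. 1. 0. 0.]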


Neural Networks | 2009

2009 Special Issue: A neural model of selective attention and object segmentation in the visual scene: An approach based on partial synchronization and star-like architecture of connections

Roman Borisyuk; Yakov B. Kazanovich; David Chik; Vadim Tikhanoff; Angelo Cangelosi

A brain-inspired computational system is presented that allows sequential selection and processing of objects from a visual scene. The system comprises three modules. The selective attention module is designed as a network of spiking neurons of the Hodgkin-Huxley type with star-like connections between the central unit and peripheral elements. The attention focus is represented by those peripheral neurons that generate spikes synchronously with the central neuron, while the activity of other peripheral neurons is suppressed. Such dynamics corresponds to the partial synchronization mode. It is shown that peripheral neurons with higher firing rates are preferentially drawn into partial synchronization. We show that local excitatory connections facilitate synchronization, while local inhibitory connections help distinguish between two groups of peripheral neurons with similar intrinsic frequencies. The module automatically scans a visual scene and sequentially selects regions of interest for detailed processing and object segmentation. The contour extraction module implements standard image processing algorithms for contour extraction; it computes raw contours of objects accompanied by noise and some spurious inclusions. At the next stage, the object segmentation module, designed as a network of phase oscillators with a star-like architecture of connections, is used for precise determination of object boundaries and noise suppression. The segmented object is represented by a group of peripheral oscillators working in the regime of partial synchronization with the central oscillator. The functioning of each module is illustrated by an example of processing a visual scene taken from the video stream of a robot camera.
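
Partial synchronization in a star-like architecture can be reproduced with a few lines of simulation. The sketch below uses Kuramoto phase oscillators in place of the paper's Hodgkin-Huxley neurons: peripheral oscillators whose natural frequencies lie close to the central one phase-lock to it and form the attended group, while the others drift. All parameter values are illustrative.

    # Star-coupled Kuramoto oscillators: partial synchronization demo.
    import numpy as np

    rng = np.random.default_rng(4)
    n, k, dt = 20, 1.5, 0.01
    omega_c = 2.0                               # central oscillator frequency
    omega = np.concatenate([omega_c + rng.normal(0, 0.05, 10),      # near group
                            omega_c + 3.0 + rng.normal(0, 0.3, 10)])  # far group
    theta_c = 0.0
    theta = rng.uniform(0, 2 * np.pi, n)

    total, mark = 15000, 10000
    for t in range(total):
        # Star architecture: periphery couples only to the centre and back.
        dc = omega_c + (k / n) * np.sin(theta - theta_c).sum()
        theta = theta + dt * (omega + k * np.sin(theta_c - theta))
        theta_c = theta_c + dt * dc
        if t == mark:                           # start of measurement window
            theta0, theta_c0 = theta.copy(), theta_c

    # Effective frequencies over the window: locked oscillators match the
    # centre (the attention focus); the rest keep drifting.
    freq = (theta - theta0) / ((total - mark - 1) * dt)
    freq_c = (theta_c - theta_c0) / ((total - mark - 1) * dt)
    print("attention focus:", np.where(np.abs(freq - freq_c) < 0.1)[0])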


international conference on robotics and automation | 2012

Imitation learning of non-linear point-to-point robot motions using Dirichlet processes

Volker Krüger; Vadim Tikhanoff; Lorenzo Natale; Giulio Sandini

In this paper we discuss the use of the infinite Gaussian mixture model and Dirichlet processes for learning robot movements from demonstrations. The starting point of this work is an earlier paper in which the authors learn a non-linear dynamic robot movement model from a small number of observations. The model in that work is learned using a classical finite Gaussian mixture model (FGMM) whose Gaussian mixtures are appropriately constrained. The problem with this approach is that one needs to make a good guess of how many mixtures the FGMM should use. In this work, we generalize the approach to an infinite Gaussian mixture model (IGMM), which does not have this limitation: the IGMM automatically finds the number of mixtures necessary to reflect the data complexity. For use in the context of a non-linear dynamic model, we develop a Constrained IGMM (CIGMM). We validate our algorithm on the same data that was used in [5], where the authors used motion capture devices to record the demonstrations. As further validation, we test our approach on novel data acquired on our iCub in a different demonstration scenario, in which the robot is physically driven by the human demonstrator.
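
For readers who want to experiment with the finite-versus-infinite mixture distinction, the sketch below fits a truncated Dirichlet-process mixture with scikit-learn's BayesianGaussianMixture, where the number of components is only an upper bound. This is a generic DP mixture on toy 2-D data, not the constrained CIGMM developed in the paper.

    # Dirichlet-process mixture: the effective number of components is
    # inferred from the data instead of being guessed in advance.
    import numpy as np
    from sklearn.mixture import BayesianGaussianMixture

    rng = np.random.default_rng(5)
    # Toy demonstration data drawn from 3 underlying movement segments.
    data = np.vstack([rng.normal([0, 0], 0.2, (100, 2)),
                      rng.normal([2, 1], 0.2, (100, 2)),
                      rng.normal([4, 0], 0.2, (100, 2))])

    # Truncated DP mixture: superfluous components receive negligible
    # weight rather than having to be pruned by hand.
    igmm = BayesianGaussianMixture(
        n_components=10,
        weight_concentration_prior_type="dirichlet_process",
        covariance_type="full",
        max_iter=500,
        random_state=0,
    ).fit(data)

    effective = (igmm.weights_ > 0.01).sum()
    print("components used:", effective)        # typically 3
    print("weights:", igmm.weights_.round(3))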


International Conference on Robotics and Automation | 2014

Three-Finger Precision Grasp on Incomplete 3D Point Clouds

Ilaria Gori; Ugo Pattacini; Vadim Tikhanoff; Giorgio Metta

We present a novel method for three-finger precision grasping and its implementation in a complete grasping tool-chain. We start from binocular vision to recover the partial 3D structure of unknown objects. We then process the incomplete 3D point clouds, searching for good triplets according to a function that accounts for both the feasibility and the stability of the solution. In particular, while stability is determined using the classical force-closure approach, feasibility is evaluated according to a new measure that includes information about the possible configuration shapes of the hand as well as the hand's inverse kinematics. Finally, we extensively assess the proposed method using the stereo vision and kinematics of the iCub robot.
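
A crude sketch of the triplet-search step: sample candidate contact triplets on a partial point cloud and rank them with a combined score. The score used here (contact normals that can roughly balance, a well-spread contact triangle, fingers at similar heights) is a simplified stand-in for the paper's force-closure stability measure and hand-kinematics feasibility measure, and the spherical toy cloud is an assumption.

    # Rank candidate three-finger contact triplets on a partial cloud.
    import numpy as np

    rng = np.random.default_rng(6)
    # Toy partial point cloud: a sphere cap with outward surface normals.
    phi = rng.uniform(0, np.pi / 2, 200)
    lam = rng.uniform(0, 2 * np.pi, 200)
    normals = np.column_stack([np.sin(phi) * np.cos(lam),
                               np.sin(phi) * np.sin(lam),
                               np.cos(phi)])
    points = 0.05 * normals                     # 5 cm radius object

    def score(idx):
        p, n = points[list(idx)], normals[list(idx)]
        balance = -np.linalg.norm(n.sum(axis=0))  # can contact forces cancel?
        spread = np.linalg.norm(np.cross(p[1] - p[0], p[2] - p[0]))  # 2x area
        reach = -np.ptp(p[:, 2])                  # fingers at similar heights
        return balance + 10.0 * spread + reach

    triplets = [tuple(c) for c in rng.choice(200, size=(500, 3))
                if len(set(c)) == 3]
    best = max(triplets, key=score)
    print("best triplet indices:", best, "score:", round(score(best), 3))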


IEEE-RAS International Conference on Humanoid Robots | 2014

3D stereo estimation and fully automated learning of eye-hand coordination in humanoid robots

Sean Ryan Fanello; Ugo Pattacini; Ilaria Gori; Vadim Tikhanoff; Marco Randazzo; Alessandro Roncone; Francesca Odone; Giorgio Metta

This paper deals with the problem of 3D stereo estimation and eye-hand calibration in humanoid robots. We first show how to implement a complete 3D stereo vision pipeline, enabling online and real-time eye calibration. We then introduce a new formulation of the eye-hand coordination problem and develop a fully automated procedure that does not require human supervision. The end-effector of the humanoid robot is automatically detected in the stereo images, providing large amounts of training data for learning the vision-to-kinematics mapping. We report exhaustive experiments using different machine learning techniques and show that a mixture of linear transformations achieves the highest accuracy in the shortest amount of time, while guaranteeing real-time performance. We demonstrate the application of the proposed system in two typical robotic scenarios: (1) object grasping and tool use; (2) 3D scene reconstruction. The platform of choice is the iCub humanoid robot.
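
The mixture-of-linear-transformations idea can be sketched as follows: cluster the workspace, fit one least-squares affine map per cluster from automatically collected (stereo point, kinematic point) pairs, and gate predictions by the nearest cluster centre. The synthetic ground-truth distortions and the two-component gating below are illustrative assumptions about the setup.

    # Vision-to-kinematics mapping as a gated mixture of affine maps.
    import numpy as np

    rng = np.random.default_rng(7)
    # Training pairs: the true relation here is two region-dependent
    # affine distortions, standing in for camera/kinematic miscalibration.
    X = rng.uniform(-0.2, 0.2, (400, 3))
    A1, A2 = np.eye(3) * 1.05, np.eye(3) * 0.95
    b1, b2 = np.array([0.01, -0.02, 0.0]), np.array([-0.01, 0.0, 0.02])
    left = X[:, 0] < 0
    Y = np.where(left[:, None], X @ A1.T + b1, X @ A2.T + b2)
    Y += rng.normal(0, 0.001, Y.shape)          # detection noise

    # Gate: two workspace clusters (a stand-in for learned gating).
    centres = np.array([[-0.1, 0, 0], [0.1, 0, 0]])
    assign = np.linalg.norm(X[:, None] - centres[None], axis=2).argmin(1)

    maps = []
    for c in range(2):                          # one affine map per cluster
        Xh = np.column_stack([X[assign == c], np.ones((assign == c).sum())])
        W, *_ = np.linalg.lstsq(Xh, Y[assign == c], rcond=None)
        maps.append(W)

    def vision_to_kinematics(p):
        c = np.linalg.norm(p - centres, axis=1).argmin()
        return np.append(p, 1.0) @ maps[c]

    test = np.array([-0.15, 0.05, 0.1])
    print("prediction:  ", vision_to_kinematics(test).round(4))
    print("ground truth:", (test @ A1.T + b1).round(4))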

Collaboration


Dive into Vadim Tikhanoff's collaboration.

Top Co-Authors

Giorgio Metta, Istituto Italiano di Tecnologia
Lorenzo Natale, Istituto Italiano di Tecnologia
Ugo Pattacini, Istituto Italiano di Tecnologia
Francesco Nori, Istituto Italiano di Tecnologia
Tanis Mar, Istituto Italiano di Tecnologia
Francesca Stramandinoli, Istituto Italiano di Tecnologia
Giulio Sandini, Istituto Italiano di Tecnologia