Vishwanathan Mohan
Istituto Italiano di Tecnologia
Publications
Featured research published by Vishwanathan Mohan.
Frontiers in Neurorobotics | 2011
Vishwanathan Mohan; Pietro Morasso
In recent years, optimal control theory (OCT) has emerged as the leading approach for investigating the neural control of movement and motor cognition along two complementary research lines: behavioral neuroscience and humanoid robotics. In both cases, there are general problems that need to be addressed, such as the “degrees of freedom (DoFs) problem,” the common core of production, observation, reasoning, and learning of “actions.” OCT, directly derived from engineering techniques for the design of control systems, quantifies task goals as “cost functions” and uses the sophisticated formal tools of optimal control to obtain the desired behavior (and predictions). We propose an alternative, “softer” approach, the passive motion paradigm (PMP), which we believe is closer to the biomechanics and cybernetics of action. The basic idea is that actions (overt as well as covert) are the consequences of an internal simulation process that “animates” the body schema with the attractor dynamics of force fields induced by the goal and by task-specific constraints. This internal simulation offers the brain a way to dynamically link motor redundancy with task-oriented constraints “at runtime,” hence solving the “DoFs problem” without explicit kinematic inversion or cost-function computation. We argue that the function of such computational machinery is not restricted to shaping motor output during action execution; it also provides the self with information on the feasibility, consequences, understanding, and meaning of “potential actions.” In this sense, taking into account recent developments in neuroscience (motor imagery, the simulation theory of covert actions, the mirror neuron system) and in embodied robotics, PMP offers a novel framework for understanding motor cognition that goes beyond the engineering control paradigm provided by OCT. The paper is therefore at the same time a review of the PMP rationale, as a computational theory, and a perspective on how to develop it for designing better cognitive architectures.
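The PMP idea can be made concrete in a few lines. The following is a minimal illustration for a planar two-link arm, not the paper's implementation; the stiffness `K`, admittance `A`, link lengths, and function names are our assumptions. The goal induces a force field in hand space, the field is mapped into joint space through the Jacobian transpose, and the body schema relaxes in it, so redundancy is resolved without explicit kinematic inversion:

```python
import numpy as np

def fk(q, l1=1.0, l2=1.0):
    """Forward model: joint angles of a planar 2-link arm -> hand position."""
    return np.array([l1*np.cos(q[0]) + l2*np.cos(q[0] + q[1]),
                     l1*np.sin(q[0]) + l2*np.sin(q[0] + q[1])])

def jacobian(q, l1=1.0, l2=1.0):
    """Maps joint velocities to hand velocities; its transpose maps forces back."""
    s1, s12 = np.sin(q[0]), np.sin(q[0] + q[1])
    c1, c12 = np.cos(q[0]), np.cos(q[0] + q[1])
    return np.array([[-l1*s1 - l2*s12, -l2*s12],
                     [ l1*c1 + l2*c12,  l2*c12]])

def pmp_relax(q0, x_goal, K=2.0, A=1.0, dt=0.01, steps=3000):
    """Relax the body schema in the force field induced by the goal.
    No kinematic inversion: F = K*(goal - hand), torque = J^T F, dq = A*torque*dt."""
    q = np.array(q0, dtype=float)
    for _ in range(steps):
        F = K * (x_goal - fk(q))              # attractive field in hand space
        q = q + dt * A * (jacobian(q).T @ F)  # passive motion of the joints
    return q

q_final = pmp_relax([0.3, 0.5], np.array([1.2, 0.8]))
```

The same loop applies unchanged for any number of joints, which is the sense in which the relaxation resolves redundancy “at runtime” rather than by inverting the kinematics.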
Autonomous Robots | 2011
Vishwanathan Mohan; Pietro Morasso; Jacopo Zenzeri; Giorgio Metta; V. Srinivasa Chakravarthy; Giulio Sandini
The core cognitive ability to perceive and synthesize ‘shapes’ underlies all our basic interactions with the world, be it shaping one’s fingers to grasp a ball or shaping one’s body while imitating a dance. In this article, we describe our attempts to understand this multifaceted problem by creating a primitive shape perception/synthesis system for the baby humanoid iCub. We specifically deal with the scenario of iCub gradually learning to draw or scribble shapes of increasing complexity, after observing a demonstration by a teacher, using a series of self-evaluations of its performance. Learning to imitate a demonstrated human movement (specifically, the visually observed end-effector trajectories of a teacher) can be considered a special case of the proposed computational machinery. This architecture is based on a loop of transformations that express the embodiment of the mechanism but, at the same time, are characterized by scale invariance and motor equivalence. The following transformations are integrated in the loop: (a) characterizing in a compact, abstract way the ‘shape’ of a demonstrated trajectory using a finite set of critical points, derived using catastrophe theory: the Abstract Visual Program (AVP); (b) transforming the AVP into a Concrete Motor Goal (CMG) in iCub’s egocentric space; (c) learning to synthesize a continuous virtual trajectory similar to the demonstrated shape using the discrete set of critical points defined in the CMG; (d) using the virtual trajectory as an attractor for iCub’s internal body model, implemented by the Passive Motion Paradigm, which includes a forward and an inverse motor model; (e) forming an Abstract Motor Program (AMP) by deriving the ‘shape’ of the self-generated movement (the forward-model output) using the same technique employed for creating the AVP; (f) comparing the AVP and AMP in order to generate an internal performance score and hence close the learning loop. The resulting computational framework further combines three crucial streams of learning: (1) motor babbling (self-exploration), (2) imitative action learning (social interaction), and (3) mental simulation, to give rise to sensorimotor knowledge that is endowed with seamless compositionality, generalization capability, and body-effector/task independence. The robustness of the computational architecture is demonstrated by means of several experimental trials of gradually increasing complexity using a state-of-the-art humanoid platform.
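Step (a), the extraction of a compact set of critical points from a demonstrated trajectory, can be illustrated in a much simplified form by treating sharp local extrema of curvature as candidate critical points. The catastrophe-theoretic analysis in the article is richer than this, so the routine below is only a hedged stand-in, with illustrative function names and an illustrative threshold:

```python
import numpy as np

def curvature(x, y):
    """Signed curvature of a sampled planar curve, via finite differences."""
    dx, dy = np.gradient(x), np.gradient(y)
    ddx, ddy = np.gradient(dx), np.gradient(dy)
    return (dx*ddy - dy*ddx) / (dx*dx + dy*dy)**1.5

def critical_points(x, y, thresh=1.0):
    """Indices of sharp local extrema of |curvature|: a crude stand-in for the
    catastrophe-theoretic critical points that summarize a stroke's 'shape'."""
    k = np.abs(curvature(x, y))
    return [i for i in range(1, len(k) - 1)
            if k[i] > thresh and k[i] > k[i-1] and k[i] >= k[i+1]]

# An 'L'-shaped stroke: two straight segments meeting at one sharp corner,
# so the abstract 'shape' reduces to a single critical point near the corner.
t = np.linspace(0.0, 1.0, 101)
x = np.where(t < 0.5, 2*t, 1.0)
y = np.where(t < 0.5, 0.0, 2*(t - 0.5))
cps = critical_points(x, y)
```

Because only the critical points are retained, the resulting description is scale-invariant and effector-independent, which is what lets the same abstraction serve both the visual (AVP) and motor (AMP) sides of the loop.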
Biological Cybernetics | 2010
Pietro Morasso; Maura Casadio; Vishwanathan Mohan; Jacopo Zenzeri
The present study proposes a computational model for the formation of whole-body reaching synergies, i.e., coordinated movements of the lower and upper limbs characterized by a focal component (the hand must reach a target) and a postural component (the center of mass must remain inside the support base). The model is based on an extension of the equilibrium-point hypothesis called the Passive Motion Paradigm (PMP), modified in order to achieve terminal attractor features and allow the integration of multiple constraints. The model is a network with terminal attractor dynamics. By simulating it in various conditions, it was possible to show that it exhibits many of the spatio-temporal features found in experimental data. In particular, the motion of the center of mass appears to be synchronized with the motion of the hand, with proportional amplitude. Moreover, the joint rotation patterns can be accounted for by a single functional degree of freedom, as shown by principal component analysis. It is also suggested that recent findings in motor imagery support the idea that the PMP network may represent the motor-cognitive part of synergy formation, uncontaminated by the effects of execution.
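The terminal-attractor idea can be conveyed with a one-dimensional sketch. In an ordinary linear attractor ẋ = K(g − x) the equilibrium is only reached asymptotically; multiplying the error by a time-varying gain Γ(t) = ξ̇/(1 − ξ), where ξ(t) is a minimum-jerk time base, makes the gain diverge as t → T, so the goal is reached in the finite time T. The specific gain schedule and names below are our illustrative choices, not necessarily those of the paper:

```python
import numpy as np

def xi(s):
    """Minimum-jerk time base: xi(0)=0, xi(1)=1, zero velocity at both ends."""
    return 6*s**5 - 15*s**4 + 10*s**3

def xi_dot(s):
    return 30*s**4 - 60*s**3 + 30*s**2

def terminal_attractor(x0, x_goal, T=1.0, dt=1e-3):
    """Integrate x_dot = Gamma(t)*(x_goal - x), Gamma = xi_dot / (T*(1 - xi)).
    The residual error (x_goal - x) is proportional to (1 - xi(t/T)), so it
    vanishes exactly at t = T instead of only asymptotically."""
    x, t, path = float(x0), 0.0, [float(x0)]
    while t < T - dt:
        s = t / T
        gamma = xi_dot(s) / (T * (1.0 - xi(s)))
        gamma = min(gamma, 1.0 / dt)   # cap the gain so explicit Euler stays stable
        x += dt * gamma * (x_goal - x)
        path.append(x)
        t += dt
    return x, path

x_final, path = terminal_attractor(0.0, 1.0)
```

In the full model the same time-base gain multiplies the force fields of the whole network, which is what synchronizes the focal (hand) and postural (center-of-mass) components.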
Frontiers in Human Neuroscience | 2015
Pietro Morasso; Maura Casadio; Vishwanathan Mohan; Francesco Rea; Jacopo Zenzeri
The body-schema concept is revisited in the context of embodied cognition, further developing the theory formulated by Marc Jeannerod that the motor system is part of a simulation network related to action, whose function is not only to shape the motor system for preparing an action (either overt or covert) but also to provide the self with information on the feasibility and the meaning of potential actions. The proposed computational formulation is based on a dynamical-system approach, linked to an extension of the equilibrium-point hypothesis called the Passive Motion Paradigm: this dynamical system generates goal-oriented, spatio-temporal, sensorimotor patterns, integrating a direct and an inverse internal model in a multi-referential framework. The purpose of such a computational model is to operate at the same time as a general synergy-formation machinery for planning whole-body actions in humanoid robots and/or for predicting coordinated sensory–motor patterns in human movements. In order to illustrate the computational approach, the integration of simultaneous, even partially conflicting tasks is analyzed in some detail with regard to postural-focal dynamics, which can be defined as the fusion of a focal task, namely reaching a target with the whole body, and a postural task, namely maintaining overall stability.
Cognitive Computation | 2013
Vishwanathan Mohan; Pietro Morasso; Giulio Sandini; Stathis Kasderidis
In Professor Taylor’s own words, the most striking feature of any cognitive system is its ability to “learn and reason” cumulatively throughout its lifetime, the structure of its inferences both emerging from and constrained by the structure of its bodily experiences. Understanding the computational/neural basis of embodied intelligence by reenacting the “developmental learning” process in cognitive robots, and in turn endowing them with primitive capabilities to learn, reason, and survive in “unstructured” environments (domestic and industrial), is the vision of the EU-funded DARWIN project, one of the last adventures Prof. Taylor embarked upon. This journey is about a year old at present, and our article describes the first developments in relation to the learning and reasoning capabilities of DARWIN robots. The novelty in the computational architecture stems from the incorporation of recent ideas, firstly from the field of “connectomics,” which attempts to explore the large-scale organization of the cerebral cortex, and secondly from recent functional imaging and behavioral studies in support of the embodied simulation hypothesis. We show through the resulting behaviors of the robot that, from a computational viewpoint, the former biological inspiration plays a central role in facilitating “functional segregation and global integration,” thus endowing the cognitive architecture with “small-world” properties. The latter, on the other hand, promotes the incessant interleaving of “top-down” and “bottom-up” information flows (which share computational/neural substrates), hence allowing learning and reasoning to “cumulatively” drive each other. How the robot learns about “objects” and simulates perception, and learns about “action” and simulates action (in this case learning to “push,” which follows pointing, reaching, and grasping behaviors), is used to illustrate the central ideas. Finally, an example of how simulation of perception and action leads the robot to reason about how its world can change, such that it becomes a little more conducive toward the realization of its internal goal (an assembly task), is used to describe how “object,” “action,” and “body” meet in the DARWIN architecture and how inference emerges through embodied simulation.
Neural Computation | 2014
Vishwanathan Mohan; Giulio Sandini; Pietro Morasso
Cumulatively developing robots offer a unique opportunity to reenact the constant interplay between neural mechanisms related to learning, memory, prospection, and abstraction from the perspective of an integrated system that acts, learns, remembers, reasons, and makes mistakes. Situated within such interplay lie some of the computationally elusive and fundamental aspects of cognitive behavior: the ability to recall and flexibly exploit diverse experiences of one’s past in the context of the present to realize goals, simulate the future, and keep learning further. This article is an adventurous exploration in this direction using a simple, engaging scenario: the humanoid iCub learns to construct the tallest possible stack given an arbitrary set of objects to play with. The learning takes place cumulatively, with the robot interacting with different objects (some previously experienced, some novel) in an open-ended fashion. Since the solution itself depends on what objects are available in the “now,” multiple episodes of past experience have to be remembered and creatively integrated in the context of the present to be successful. Starting from zero, where the robot knows nothing, we explore the computational basis of the organization of episodic memory in a cumulatively learning humanoid and address (1) how relevant past experiences can be reconstructed based on the present context, (2) how multiple stored episodic memories compete to survive in the neural space and not be forgotten, (3) how remembered past experiences can be combined with explorative actions to learn something new, and (4) how multiple remembered experiences can be recombined to generate novel behaviors (without exploration). Through the resulting behaviors of the robot as it builds, breaks, learns, and remembers, we emphasize that mechanisms of episodic memory are fundamental design features necessary to enable the survival of autonomous robots in a real world where neither everything can be known nor everything experienced.
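A toy sketch of points (1) and (2), context-based reconstruction and competition for survival, might look as follows. The representation (episodes as feature vectors, exponential similarity, strength-based eviction) and the outcome labels are entirely our illustrative assumptions, not the architecture of the article:

```python
import numpy as np

class EpisodicStore:
    """Toy episodic memory: episodes are (context vector, outcome) pairs.
    Recall is similarity-driven; recalled episodes are reinforced while the
    rest decay, and the weakest episode is forgotten when capacity is exceeded."""
    def __init__(self, capacity=4, decay=0.9):
        self.episodes, self.strength = [], []
        self.capacity, self.decay = capacity, decay

    def store(self, context, outcome):
        self.episodes.append((np.asarray(context, float), outcome))
        self.strength.append(1.0)
        if len(self.episodes) > self.capacity:
            # competition for survival: the weakest *old* episode is forgotten
            i = int(np.argmin(self.strength[:-1]))
            del self.episodes[i]
            del self.strength[i]

    def recall(self, context):
        context = np.asarray(context, float)
        sims = [np.exp(-np.linalg.norm(context - c)) for c, _ in self.episodes]
        best = int(np.argmax(sims))
        self.strength = [s * self.decay for s in self.strength]  # slow forgetting
        self.strength[best] += 1.0                               # reinforcement
        return self.episodes[best][1]

mem = EpisodicStore(capacity=2)
mem.store([0.0, 0.0], "large flat base, stack on top")
mem.store([1.0, 1.0], "round object, do not stack on it")
```

Even in this crude form, recall is reconstructive (driven by the present context rather than by an address) and memories that are never re-used gradually lose the competition, which is the qualitative behavior points (1) and (2) describe.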
International Conference on Artificial Neural Networks | 2006
Vishwanathan Mohan; Pietro Morasso
Before making a movement aimed at achieving a task, human beings either run a mental process that attempts to find a feasible course of action (one compatible with a number of internal and external constraints and near-optimal according to some criterion) or select one from a repertoire of previously learned actions, according to the parameters of the task. If neither reasoning process succeeds, a typical backup strategy is to look for a tool that might allow the operator to match all the task constraints. A cognitive robot should support a similar reasoning system. A central element of this architecture is a coupled pair of controllers: an FMC (forward motor controller, which maps tentative trajectories in the joint space into the corresponding trajectories of the end-effector variables in the workspace) and an IMC (inverse motor controller, which maps desired trajectories of the end-effector into feasible trajectories in the joint space). The proposed FMC/IMC architecture operates with any degree of redundancy and can deal with geometric constraints (range of motion in the joint space, internal and external constraints in the workspace) and effort-related constraints (range of torque of the actuators, etc.). It operates by alternating two basic operations: (1) relaxation in the configuration space (for reaching a target pose); (2) relaxation in the null space of the kinematic transformation (for producing the required interaction force). The failure of either relaxation can trigger a higher level of reasoning. For both elements of the architecture we propose a closed-form solution and a solution based on ANNs.
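The relaxation in configuration space (operation 1) can be sketched for a redundant planar three-link arm. This is a hedged illustration, not the paper's closed-form or ANN solution: the gains `K` and `Kj`, the unit link lengths, the joint limits, and the function names are our assumptions, and the geometric range-of-motion constraint is folded in as a soft repulsive field:

```python
import numpy as np

def fk(q):
    """Forward motor controller (FMC): joint angles of a planar 3-link arm
    (unit link lengths) -> hand position in the workspace."""
    a = np.cumsum(q)
    return np.array([np.cos(a).sum(), np.sin(a).sum()])

def jac(q):
    """Jacobian of fk: column j sums the contributions of links j..n."""
    a = np.cumsum(q)
    return np.array([[-np.sin(a[j:]).sum() for j in range(len(q))],
                     [ np.cos(a[j:]).sum() for j in range(len(q))]])

def imc_relax(q0, x_goal, q_min, q_max, K=2.0, Kj=10.0, dt=0.01, steps=5000):
    """Inverse motor controller (IMC) as relaxation in configuration space:
    an attractive field toward the target in the workspace, plus a repulsive
    field (zero inside the range of motion) that keeps joints within limits."""
    q = np.array(q0, dtype=float)
    for _ in range(steps):
        F = K * (x_goal - fk(q))                  # task-space force field
        Fj = Kj * (np.clip(q, q_min, q_max) - q)  # joint range-of-motion field
        q = q + dt * (jac(q).T @ F + Fj)
    return q

x_goal = np.array([1.5, 1.0])
q_min, q_max = -np.pi/2 * np.ones(3), np.pi/2 * np.ones(3)
q_final = imc_relax([-0.2, 0.8, 0.9], x_goal, q_min, q_max)
```

A failure of this relaxation (residual error that does not vanish) is exactly the kind of signal that, in the architecture described above, would trigger the higher level of reasoning, e.g. the search for a tool.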
Archive | 2011
Vishwanathan Mohan; Pietro Morasso; Giorgio Metta; Stathis Kasderidis
Natural/artificial systems that are capable of utilizing thoughts in the service of their actions are gifted with the profound opportunity to mentally manipulate the causal structure of their physical interactions with the environment. A cognitive robot can in this way virtually reason about how an unstructured world should “change” such that it becomes a little more conducive toward the realization of its internal goals. In this article, we describe the various internal models for real/mental action generation developed in the GNOSYS cognitive architecture and demonstrate how their coupled interactions can endow the GNOSYS robot with a preliminary ability to virtually manipulate neural activity in its mental space in order to initiate flexible goal-directed behavior in its physical space. Making things more interesting (and computationally challenging) is the fact that the environment in which the robot seeks to achieve its goals consists of specially crafted “stick and ball” versions of real experimental scenarios from animal reasoning (such as tool use in chimpanzees, novel tool construction in New Caledonian crows, the classic trap-tube paradigm, and their possible combinations). We specifically focus on the progressive creation of the following internal models in the behavioral repertoire of the robot: (a) a Passive Motion Paradigm-based forward/inverse model for mental simulation/real execution of goal-directed arm (and arm + tool) movements; (b) a spatial mental map of the playground; and (c) an internal model representing the causality of pushing objects, further learning to push intelligently in order to avoid randomly placed traps in the trapping groove. After presenting the computational architecture for the internal models, we demonstrate how the robot can use them to mentally compose a sequence of “Push–Move–Reach” in order to grasp an (otherwise unreachable) ball in its playground.
Conference on Biomimetic and Biohybrid Systems | 2012
Vishwanathan Mohan; Pietro Morasso
From using forks to eat to maneuvering the high-tech gadgets of modern times, humans are adept at swiftly learning to use a wide range of tools in their daily lives. The essence of ‘tool use’ lies in our gradual progression from learning to act ‘on’ objects to learning to act ‘with’ objects in ways that counteract the limitations of ‘perception, action and movement’ imposed by our bodies. At the same time, to learn both “cumulatively” and “swiftly,” a cognitive agent (human or humanoid) must be able to efficiently integrate multiple streams of information that aid the learning process itself. Most important among them are social interaction (for example, imitating a teacher’s demonstration), physical interaction (or practice), and the “recycling” of previously learnt knowledge (experience) in new contexts. This article presents the skill-learning architecture being developed for the humanoid iCub, which dynamically integrates multiple streams of learning and multiple task-specific constraints, and incorporates novel principles that we believe are crucial for constructing a growing motor vocabulary in acting/learning robots. A further central feature is our departure from the well-known notion of ‘trajectory formation’ and the introduction of the idea of ‘shape’ in the domain of movement. The idea is to learn in an abstract fashion, hence allowing both “task-independent” knowledge reuse and task-specific “compositionality” to coexist. The scenario of how iCub learns to bimanually coordinate a new tool (a toy crane) to pick up otherwise unreachable objects in its workspace (recycling its past experience of learning to draw) is used both to illustrate the central ideas and to raise further questions.
Topics in Cognitive Science | 2014
Frank Broz; Chrystopher L. Nehaniv; Tony Belpaeme; Ambra Bisio; Kerstin Dautenhahn; Luciano Fadiga; Tomassino Ferrauto; Kerstin Fischer; Frank Förster; Onofrio Gigliotta; Sascha S. Griffiths; Hagen Lehmann; Katrin Solveig Lohan; Caroline Lyon; Davide Marocco; Gianluca Massera; Giorgio Metta; Vishwanathan Mohan; Anthony F. Morse; Stefano Nolfi; Francesco Nori; Martin Peniak; Karola Pitsch; Katharina J. Rohlfing; Gerhard Sagerer; Yo Sato; Joe Saunders; Lars Schillingmann; Alessandra Sciutti; Vadim Tikhanoff
This article presents results from a multidisciplinary research project on the integration and transfer of language knowledge into robots as an empirical paradigm for the study of language development in both humans and humanoid robots. Within the framework of human linguistic and cognitive development, we focus on how three central types of learning interact and co-develop: individual learning about one’s own embodiment and the environment, social learning (learning from others), and the learning of linguistic capability. Our primary concern is how these capabilities can scaffold each other’s development in a continuous feedback cycle as their interactions yield increasingly sophisticated competencies in the agent’s capacity to interact with others and manipulate its world. Experimental results are summarized in relation to milestones in human linguistic and cognitive development and show that the mutual scaffolding of social learning, individual learning, and linguistic capabilities creates the context, conditions, and requisites for learning in each domain. Challenges and insights identified as a result of this research program are discussed with regard to possible and actual contributions to cognitive science and language ontogeny. In conclusion, directions for future work are suggested that continue to develop this approach toward an integrated framework for understanding these mutually scaffolding processes as a basis for language development in humans and robots.