Beata J. Grzyb
Jaume I University
Publications
Featured research published by Beata J. Grzyb.
IEEE Transactions on Autonomous Mental Development | 2011
Eris Chinellato; Marco Antonelli; Beata J. Grzyb; A.P. del Pobil
Primates often perform coordinated eye and arm movements, contextually fixating and reaching towards nearby objects. This combination of looking and reaching to the same target is used by infants to establish an implicit visuomotor representation of the peripersonal space, useful for both oculomotor and arm motor control. In this work, taking inspiration from such behavior and from primate visuomotor mechanisms, a shared sensorimotor map of the environment, built on a radial basis function framework, is configured and trained by the coordinated control of eye and arm movements. Computational results confirm that the approach is especially suitable for the problem at hand and for implementation on a real humanoid robot. By exploratory gazing and reaching actions, either free or goal-based, the artificial agent learns to perform direct and inverse transformations between stereo vision, oculomotor, and joint-space representations. The integrated sensorimotor map, which allows the peripersonal space to be contextually represented through different vision and motor parameters, is never made explicit, but rather emerges through the interaction of the agent with the environment.
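The radial-basis-function idea described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: a small Gaussian RBF network maps a 2-D gaze direction onto a 2-D arm configuration, with a linear readout trained by the delta rule. The grid of centers, the widths, and the toy gaze-to-arm relation are all illustrative assumptions.

```python
import math
import random

def rbf_features(x, centers, width=0.5):
    """Gaussian activation of each basis unit for input x."""
    return [math.exp(-sum((xi - ci) ** 2 for xi, ci in zip(x, c)) / (2 * width ** 2))
            for c in centers]

def train_rbf_map(samples, centers, lr=0.1, epochs=100):
    """Delta-rule training of the linear readout on (gaze, arm) pairs."""
    n_out = len(samples[0][1])
    W = [[0.0] * len(centers) for _ in range(n_out)]
    for _ in range(epochs):
        for x, y in samples:
            phi = rbf_features(x, centers)
            for k in range(n_out):
                err = y[k] - sum(w * p for w, p in zip(W[k], phi))
                for j in range(len(centers)):
                    W[k][j] += lr * err * phi[j]
    return W

def predict(W, x, centers):
    phi = rbf_features(x, centers)
    return [sum(w * p for w, p in zip(row, phi)) for row in W]

# Toy forward relation standing in for real eye/arm training data.
random.seed(0)
centers = [(a / 4.0, b / 4.0) for a in range(5) for b in range(5)]
samples = [((g1, g2), (0.8 * g1 + 0.2 * g2, 0.5 * g2))
           for g1, g2 in [(random.random(), random.random()) for _ in range(150)]]
W = train_rbf_map(samples, centers)
```

Training the same basis on pairs sampled in both directions is one way such a shared map can serve both the direct and the inverse transformation.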
international symposium on neural networks | 2009
Beata J. Grzyb; Eris Chinellato; Grzegorz M. Wojcik; Wieslaw A. Kaminski
The properties of separation ability and computational efficiency of Liquid State Machines depend on the neural model employed and on the connection density in the liquid column. A simple model of part of the mammalian visual system, consisting of one hypercolumn, was examined. The system was stimulated with two different input patterns, and the Euclidean distance, as well as the partial and global entropy of the liquid-column responses, were calculated. Interesting insights could be drawn regarding the properties of different neural models used in the liquid hypercolumn, and regarding the effect of connection density on the information representation capability of the system.
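The two measures named in the abstract are straightforward to compute. Below is an illustrative sketch, with hypothetical firing rates standing in for real liquid-column responses: the Euclidean distance between the responses to two stimuli quantifies separation, and the Shannon entropy of a response's normalized rate distribution quantifies how information is spread across the column.

```python
import math

def euclidean_distance(a, b):
    """Separation between two liquid-column response vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def response_entropy(rates):
    """Shannon entropy (bits) of the normalized firing-rate distribution."""
    total = sum(rates)
    probs = [r / total for r in rates if r > 0]
    return -sum(p * math.log2(p) for p in probs)

# Hypothetical firing rates (Hz) of an 8-neuron column for two input patterns.
resp_a = [12.0, 3.0, 0.0, 7.0, 1.0, 9.0, 4.0, 2.0]
resp_b = [2.0, 10.0, 5.0, 1.0, 8.0, 0.0, 6.0, 3.0]

sep = euclidean_distance(resp_a, resp_b)
h_a = response_entropy(resp_a)
```

Larger separation between responses to distinct stimuli generally makes the subsequent readout's classification task easier.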
international symposium on neural networks | 2009
Beata J. Grzyb; Eris Chinellato; Grzegorz M. Wojcik; Wieslaw A. Kaminski
This paper presents an approach to facial expression recognition based on the theory of liquid computing. To date, no emotion recognition systems based on spiking neural networks exist, and our work is the first attempt in this direction. We investigated the pattern recognition ability of Liquid State Machines based on various neural models, such as the integrate-and-fire, resonate-and-fire, FitzHugh-Nagumo, Morris-Lecar, Hindmarsh-Rose, and Izhikevich models. No single Liquid State Machine provided particularly good results, but a global classifier we defined by merging the responses of the different models achieved a very satisfactory performance in expression recognition.
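The merging step can be illustrated with a simple score-summing scheme (one plausible reading of "global classifier", not necessarily the paper's exact rule): each per-model readout produces class scores, the global classifier sums them across models, and the argmax wins. The model names and scores below are hypothetical.

```python
def merge_readouts(per_model_scores):
    """Sum class scores across models and return the winning class index."""
    n_classes = len(next(iter(per_model_scores.values())))
    totals = [0.0] * n_classes
    for scores in per_model_scores.values():
        for k, s in enumerate(scores):
            totals[k] += s
    return max(range(n_classes), key=lambda k: totals[k])

# Hypothetical scores over 3 expression classes from 3 neural models.
scores = {
    "integrate_and_fire": [0.2, 0.5, 0.3],
    "izhikevich":         [0.4, 0.3, 0.3],
    "fitzhugh_nagumo":    [0.1, 0.6, 0.3],
}
winner = merge_readouts(scores)  # class 1 wins: 0.5 + 0.3 + 0.6 = 1.4
```

A scheme like this lets individually weak liquids compensate for each other's errors, which matches the reported finding that the merged classifier outperformed any single machine.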
systems man and cybernetics | 2012
Eris Chinellato; Beata J. Grzyb; A.P. del Pobil
The aim of this paper is to improve the skills of robotic systems in their interaction with nearby objects. The basic idea is to enhance visual estimation of objects in the world through the merging of different visual estimators of the same stimuli. A neuroscience-inspired model of stereoptic and perspective orientation estimators, merged according to different criteria, is implemented on a robotic setup and tested in different conditions. Experimental results suggest that the integration of multiple monocular and binocular cues can make robot sensory systems more reliable and versatile. The same results, compared with simulations and data from human studies, show that the model is able to reproduce some well-recognized neuropsychological effects.
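One standard criterion for merging estimators of the same stimulus is reliability-weighted (maximum-likelihood) cue integration, in which each estimate is weighted by the inverse of its variance. The sketch below shows that criterion as one illustrative possibility; the specific merging criteria used in the paper, and the numbers here, are not taken from it.

```python
def fuse_estimates(estimates):
    """Inverse-variance weighted fusion.

    estimates: list of (value, variance) pairs from independent cues.
    Returns the fused value and its (reduced) variance.
    """
    weights = [1.0 / var for _, var in estimates]
    total = sum(weights)
    fused = sum(w * v for w, (v, _) in zip(weights, estimates)) / total
    return fused, 1.0 / total

# Hypothetical slant estimates in degrees: stereo assumed more reliable here.
stereo = (30.0, 4.0)        # (estimate, variance)
perspective = (38.0, 16.0)
fused_value, fused_var = fuse_estimates([stereo, perspective])
```

A useful property of this rule is that the fused variance is always smaller than that of the best single cue, which is one way merging can make a robot's sensory estimates more reliable.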
Industrial Robot-an International Journal | 2009
Beata J. Grzyb; Eris Chinellato; Antonio Morales; Angel P. del Pobil
Purpose – The purpose of this paper is to present a novel multimodal approach to the problem of planning and performing a reliable grasping action on unmodeled objects.
Design/methodology/approach – The robotic system is composed of three main components. The first is a conceptual manipulation framework based on grasping primitives. The second is a visual processing module that uses stereo images and biologically inspired algorithms to accurately estimate the pose, size, and shape of an unmodeled target object. A grasp action is planned and executed by the third component of the system, a reactive controller that uses tactile feedback to compensate for possible inaccuracies and thus complete the grasp even in difficult or unexpected conditions.
Findings – Theoretical analysis and experimental results have shown that the proposed approach to grasping, based on the concurrent use of complementary sensory modalities, is very promising and suitable even for changing, dynamic environments.
Research limitations/i...
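The reactive, tactile-driven correction described above can be sketched as a simple feedback loop. Everything below is an illustrative assumption rather than the paper's controller: two fingertip force readings drive small lateral pose corrections until contact is balanced, with a toy simulation standing in for the robot.

```python
def reactive_grasp(read_tactile, correct_pose, max_steps=20, tol=0.05):
    """Correct the hand pose until fingertip forces balance (or give up)."""
    for step in range(max_steps):
        left, right = read_tactile()
        imbalance = left - right
        if abs(imbalance) <= tol:
            return True, step            # grasp considered stable
        correct_pose(-0.5 * imbalance)   # shift toward the weaker contact
    return False, max_steps

# Toy simulation: a lateral hand offset produces a proportional imbalance.
state = {"offset": 0.4}

def read_tactile():
    return 1.0 + state["offset"], 1.0 - state["offset"]

def correct_pose(delta):
    state["offset"] += delta

ok, steps = reactive_grasp(read_tactile, correct_pose)
```

The point of such a loop is exactly the one the abstract makes: tactile feedback absorbs residual errors from visual estimation, so the grasp can succeed even when the initial plan is slightly off.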
international symposium on neural networks | 2008
Eris Chinellato; Beata J. Grzyb; A.P. del Pobil
Integration of multiple visual cues provides natural systems with superior abilities in dealing with nearby objects. This research is aimed at verifying whether robotic systems could also benefit from the merging of different visual cues of the same stimulus. A computational model of stereoscopic and perspective orientation estimators, merged according to different criteria, is implemented on a robotic setup and tested in different conditions. Experimental results suggest that the principle of cue integration can make robot sensory systems more reliable and robust. The same results, compared with data from human studies, show that the model is able to reproduce some well-known neuropsychological effects.
simulation of adaptive behavior | 2012
Marco Antonelli; Beata J. Grzyb; Vicente Castelló; Angel P. Del Pobil
Reaching a target object requires accurate estimation of the object's spatial position and its further transformation into a suitable arm-motor command. In this paper, we propose a framework that provides a robot with the capacity to represent its reachable space in an adaptive way. The location of the target is represented implicitly by both the gaze direction and the angles of the arm joints. Two paired neural networks are used to compute the direct and inverse transformations between the arm position and the head position. These networks allow reaching the target either through a ballistic movement or through visually guided actions. Thanks to the latter skill, the robot can adapt its sensorimotor transformations so as to reflect changes in its body configuration. The proposed framework was implemented on the NAO humanoid robot, and our experimental results provide evidence of its adaptive capabilities.
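The two reaching modes can be contrasted with a deliberately tiny model. This is a hedged sketch with toy 1-D "kinematics", not the NAO implementation: the ballistic reach applies the inverse transformation once and open-loop, while the visually guided reach iterates small corrections on the visual error, which is what makes adaptation to a miscalibrated body possible.

```python
def forward_model(joint):
    """Direct transformation: arm joint angle -> gaze-centered position."""
    return 0.5 * joint + 0.1

def inverse_model(gaze):
    """Inverse transformation: gaze-centered position -> arm joint angle."""
    return (gaze - 0.1) / 0.5

def ballistic_reach(target_gaze):
    """One-shot, open-loop reach through the inverse model."""
    return inverse_model(target_gaze)

def guided_reach(target_gaze, gain=0.8, steps=100, tol=1e-6):
    """Closed-loop reach: iteratively reduce the visual error."""
    joint = 0.0
    for _ in range(steps):
        error = target_gaze - forward_model(joint)
        if abs(error) <= tol:
            break
        joint += gain * error
    return joint
```

When the inverse model is accurate both modes land on the same joint angle; when the body changes, only the guided mode still converges, and its residual corrections are exactly the signal that can be used to re-train the paired networks.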
international work conference on the interplay between natural and artificial computation | 2009
Eris Chinellato; Beata J. Grzyb; Nicoletta Marzocchi; Annalisa Bosco; Patrizia Fattori; Angel P. del Pobil
Data related to the coordination and modulation between visual information, gaze direction and arm reaching movements in primates are analyzed from a computational point of view. The goal of the analysis is to construct a model of the mechanisms that allow humans and other primates to build dynamical representations of their peripersonal space through active interaction with nearby objects. The application of the model to robotic systems will allow artificial agents to improve their skills in their exploration of the nearby space.
IEEE Transactions on Autonomous Mental Development | 2013
Beata J. Grzyb; Linda B. Smith; A.P. del Pobil
Previous research suggests that reaching and walking behaviors may be linked developmentally, as reaching changes at the onset of walking. Here we report new evidence of an apparent loss of the distinction between reachable and nonreachable distances as children start walking. The experiment compared nonwalkers, walkers with help, and independent walkers in a reaching task with targets at varying distances. Reaching attempts, contact, leaning, and communication behaviors were recorded. Most of the children reached for the unreachable objects the first time they were presented. Nonwalkers, however, reached less on subsequent trials, showing clear adjustment of their reaching decisions following failures. In contrast, walkers consistently attempted reaches to targets at unreachable distances. We suggest that these reaching errors may result from inappropriate integration of reaching and locomotor actions, attention control, and near/far visual space. We propose a reward-mediated model, implemented on a NAO humanoid robot, that replicates the main results from our study, showing an increase in reaching attempts to nonreachable distances after the onset of walking.
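The reward-mediated mechanism can be illustrated with a toy value update (an illustration of the general idea, not the paper's NAO model): the value of attempting a reach at an unreachable distance is updated from outcomes by a delta rule, and an extra intrinsic reward term, standing in here for the rewarding onset of walking, keeps far attempts from extinguishing. The learning rate, reward values, and trial counts are all hypothetical.

```python
def update_value(v, reward, lr=0.3):
    """Delta-rule value update from a single reach outcome."""
    return v + lr * (reward - v)

def simulate(trials, intrinsic=0.0, v0=0.8):
    """Far target is never contacted (extrinsic reward 0); return final value."""
    v = v0
    for _ in range(trials):
        v = update_value(v, 0.0 + intrinsic)
    return v

v_nonwalker = simulate(10, intrinsic=0.0)  # value decays -> fewer attempts
v_walker = simulate(10, intrinsic=0.6)     # value sustained -> keeps reaching
```

Under these assumptions the nonwalker's attempt value collapses toward zero across trials while the walker's settles near the intrinsic reward level, mirroring the persistence of reaching errors reported after walking onset.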
simulation of adaptive behavior | 2012
Beata J. Grzyb; Angel P. del Pobil; Linda B. Smith
The emergence of upright locomotion in infants has been shown to influence and dramatically reorganize a myriad of behaviors. The change, however, does not always lead to improvements, but can also cause temporary instability and loss of integrity in many seemingly unrelated systems. Our empirical study showed that the onset of walking is also related to a disruption in infants' perceived reachability. This paper investigates the reorganization of the processes responsible for the integration of different visual depth cues at the onset of walking, and how such recalibration influences reaching behavior. Reward-mediated learning is employed to mimic the development of absolute distance perception in infants over a short developmental timescale.