Publication


Featured research published by Vivian M. De La Cruz.


Frontiers in Behavioral Neuroscience | 2014

Making fingers and words count in a cognitive robot.

Vivian M. De La Cruz; Alessandro G. Di Nuovo; Santo Di Nuovo; Angelo Cangelosi

Evidence from developmental as well as neuroscientific studies suggests that finger counting activity plays an important role in the acquisition of numerical skills in children. It has been claimed that this skill helps in building motor-based representations of number that continue to influence number processing well into adulthood, facilitating the emergence of number concepts from sensorimotor experience through a bottom-up process. The act of counting also involves the acquisition and use of a verbal number system of which number words are the basic building blocks. Using a Cognitive Developmental Robotics paradigm, we present results of a modeling experiment on whether finger counting and the association of number words (or tags) with fingers could serve to bootstrap the representation of number in a cognitive robot, enabling it to perform basic numerical operations such as addition. The cognitive architecture of the robot is based on artificial neural networks, which enable the robot to learn both sensorimotor skills (finger counting) and linguistic skills (using number words). The results obtained in our experiments show that learning the number words in sequence along with finger configurations helps the fast building of the initial representation of number in the robot. Number knowledge is, instead, not as efficiently developed when number words are learned out of sequence without finger counting. Furthermore, the internal representations of the finger configurations themselves, developed by the robot as a result of the experiments, sustain the execution of basic arithmetic operations, consistent with evidence coming from developmental research with children. The model and experiments demonstrate the importance of sensorimotor skill learning in robots for the acquisition of abstract knowledge such as numbers.
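The core idea of a motor-based number representation can be illustrated with a toy sketch (this is not the paper's neural model; the cumulative ten-finger code and the `fingers`/`add_on_fingers` names are illustrative assumptions): with a cumulative finger code, addition reduces to raising more fingers and reading the sum off the configuration.

```python
import numpy as np

def fingers(n):
    """Motor-style code: the first n of 10 'fingers' are raised."""
    f = np.zeros(10)
    f[:n] = 1.0
    return f

def add_on_fingers(a, b):
    """Addition grounded in the motor code: raise a fingers, then raise
    b more starting from position a, and read the sum off the resulting
    configuration. Valid for a + b <= 10, as with real hands."""
    config = fingers(a) + np.roll(fingers(b), a)
    return int(config.sum())

add_on_fingers(3, 4)  # → 7
```

In this toy code the "sum" is never computed symbolically; it falls out of the sensorimotor representation itself, which is the intuition the abstract describes.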


International Symposium on Neural Networks | 2012

Mental practice and verbal instructions execution: A cognitive robotics study

Alessandro G. Di Nuovo; Vivian M. De La Cruz; Santo Di Nuovo

Understanding the tight relationship that exists between mental imagery and motor activities (i.e., how images in the mind can influence movements and motor skills) has become a topic of interest and is of particular importance in domains in which improving those skills is crucial for obtaining better performance, such as sports and rehabilitation. In this paper, using an embodied cognition approach and a cognitive robotics platform, we introduce initial results of an ongoing study that explores the impact linguistic stimuli can have on processes of mental imagery practice and subsequent motor execution and performance. Results are presented showing that the robot used is able to “imagine”, or “mentally” recall, and accurately execute movements learned in previous training phases, strictly on the basis of the verbal commands issued. Further tests show that data obtained with “imagination” could be used to simulate “mental training” processes such as those that have been employed with human subjects in sports training, in order to enhance precision in the performance of new tasks through the association of different verbal commands.


Adaptive Behavior | 2013

Special issue on artificial mental imagery in cognitive systems and robotics

Alessandro G. Di Nuovo; Vivian M. De La Cruz; Davide Marocco

The present special issue of Adaptive Behavior focuses on exploiting the concept of mental imagery and mental simulation as a fundamental cognitive capability, as applied to artificial cognitive systems and robotics. The special issue is motivated by the fact that the processes behind the human ability to create mental images have recently become an object of renewed interest in cognitive science and, in particular, in their applications in the field of artificial cognitive systems. With the aim of providing a panorama of the current research activity on the topic, this special issue presents seven selected contributions considered to be representative of the state of the art in the field. In the section that follows, we give a short introduction to recent work on mental imagery in general, and in the field of artificial cognitive systems in particular, in order to help the reader contextualize the topic. Subsequently, we summarize the new findings that this special issue presents.

Mental imagery has long been the subject of research and debate in philosophy, psychology, cognitive science, and more recently, neuroscience (Kosslyn, 1996), but only quite recently has a growing amount of evidence from empirical studies begun to demonstrate the relationship between bodily experiences and mental processes that actively involve body representations. This is also due to the fact that, in the past, philosophical and scientific investigations of the topic primarily focused upon visual mental imagery. Contemporary imagery research has now broadly extended its scope to include every experience that resembles the experience of perceiving from any sensory modality. The underlying neurocognitive mechanisms involved in mental imagery, however, and the subsequent physical performance, are still far from being fully understood. Understanding the processes behind the human ability to create mental images of events and experiences remains a critical issue.
Recent research, in both experimental and practical contexts, suggests that imagined and executed movement planning relies on internal models for action (Hesslow, 2012). These representations are frequently associated with the notion of internal (forward) models and are hypothesized to be an integral part of action planning (Wolpert, 1997; Skoura, Vinter, & Papaxanthis, 2009). Furthermore, Steenbergen, van Nimwegen, and Crajé (2007) suggest that motor imagery may be a necessary prerequisite for motor planning. Jeannerod (2001) studied the role of motor imagery in action planning and proposed the so-called equivalence hypothesis, suggesting that motor simulation and motor control processes are functionally equivalent (Munzert, Lorey, & Zentgraf, 2009; Ramsey, Cumming, Eastough, & Edwards, 2010). Advances in information and communication technologies have made new tools available to scientists interested in artificial cognitive systems and in designing robotic platforms equipped with sophisticated motors and sensors in order to replicate animal or human sensorimotor input/output streams, e.g. the iCub humanoid robot (Metta, Natale, Nori, Sandini, Vernon, Fadiga, et al., 2010). These platforms, despite their tremendous potential applications, still face several challenges in developing complex behaviors (Asada et al., 2009). To this end, increased research efforts are needed to understand the role of mental imagery and its mechanisms in human cognition and how it can be used to enhance motor control in autonomous robots. From a technological point of view, the impact in the field of robotics could be significant. It could lead to the derivation of engineering principles for the development of autonomous systems that are capable of
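The internal (forward) model idea discussed above can be made concrete with a minimal sketch: a model learns to predict the sensory consequence of a motor command, and "mental simulation" then amounts to rolling that model forward without moving the body. The one-joint arm, its dynamics, and all names here are hypothetical illustrations, not any cited model.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical one-joint "arm": the true dynamics move the joint angle
# by a fixed fraction of the motor command.
def real_arm(angle, command):
    return angle + 0.1 * command

# Learn a linear forward model angle' ~ w . [angle, command] from
# motor-babbling experience with the real arm.
X, y = [], []
angle = 0.0
for _ in range(200):
    command = rng.uniform(-1, 1)
    nxt = real_arm(angle, command)
    X.append([angle, command]); y.append(nxt)
    angle = nxt
w, *_ = np.linalg.lstsq(np.array(X), np.array(y), rcond=None)

def imagine(angle, commands):
    """Mental simulation: roll the forward model ahead without moving the arm."""
    for c in commands:
        angle = float(np.array([angle, c]) @ w)
    return angle

imagined = imagine(0.0, [1.0] * 5)   # imagined final angle after 5 commands
actual = 0.0
for _ in range(5):                   # what the real arm actually does
    actual = real_arm(actual, 1.0)
```

Because the toy dynamics are exactly linear, the imagined and executed trajectories agree; with a learned nonlinear model the same loop implements "imagining" a movement before executing it.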


Cognitive Computation | 2010

First Words Learning: A Cortical Model

Alessio Plebe; Marco Mazzone; Vivian M. De La Cruz

Humans come to recognize an infinite variety of natural and man-made objects in their lifetime and make use of sounds to identify and categorize them. How does this lifelong learning process begin? Many hypotheses have been proposed to explain the learning of first words, with some emerging from the particular characteristics observed in child development. One is the peculiar trend in the speed with which words are learned, which has been referred to in the literature as “fast mapping”. We present a neural network model trained in stages that parallel developmental ones and that simulates cortical processes of self-organization during an early crucial stage of first word learning. This is done by taking into account visual and acoustic perceptions only. The results obtained show evidence of the emergence, in the artificial maps used in the model, of cortical functions similar to those found in their biological correlates in the brain. Evidence of non-catastrophic fast mapping based on the quantity of objects and labels gradually learned by the model is also found. We interpret these results as meaning that early stages of first word learning may be explained by strictly perceptual learning processes, coupled with cortical processes of self-organization and of fast mapping. Specialized word-learning mechanisms thus need not be invoked, at least not at an early word-learning stage.
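Cortical self-organization of the kind the model relies on is commonly implemented with self-organizing maps (SOMs). The following is a minimal generic SOM sketch, not the paper's architecture; the grid size, learning schedule, and toy data are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def train_som(data, grid=8, epochs=40, lr0=0.5, sigma0=3.0):
    """Minimal self-organizing map: the best-matching unit and its grid
    neighbors drift toward each input, yielding a topographic map."""
    n_units = grid * grid
    coords = np.array([(i, j) for i in range(grid) for j in range(grid)], float)
    W = rng.random((n_units, data.shape[1]))
    for t in range(epochs):
        lr = lr0 * (1 - t / epochs)                 # decaying learning rate
        sigma = sigma0 * (1 - t / epochs) + 0.5     # shrinking neighborhood
        for x in rng.permutation(data):
            bmu = np.argmin(((W - x) ** 2).sum(1))  # best-matching unit
            d2 = ((coords - coords[bmu]) ** 2).sum(1)
            h = np.exp(-d2 / (2 * sigma ** 2))      # Gaussian neighborhood
            W += lr * h[:, None] * (x - W)          # move units toward input
    return W, coords

# Toy "perceptions": two well-separated clusters in a 4-d feature space.
data = np.vstack([rng.normal(0.2, 0.05, (50, 4)),
                  rng.normal(0.8, 0.05, (50, 4))])
W, coords = train_som(data)

# Quantization error: mean distance from each input to its winning unit.
qe = np.mean([np.min(((W - x) ** 2).sum(1)) ** 0.5 for x in data])
```

After training, nearby map units respond to similar inputs, which is the topographic property that hierarchies of such maps exploit when associating visual and acoustic representations.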


Joint IEEE International Conference on Development and Learning and Epigenetic Robotics | 2015

A Deep Learning Neural Network for Number Cognition: A bi-cultural study with the iCub

Alessandro G. Di Nuovo; Vivian M. De La Cruz; Angelo Cangelosi

The novel deep learning paradigm offers a highly biologically plausible way to train neural network architectures with many layers, inspired by the hierarchical organization of the human brain. Indeed, deep learning gives a new dimension to research modeling human cognitive behaviors, and provides new opportunities for applications in cognitive robotics. In this paper, we present a novel deep neural network architecture for number cognition by means of finger counting and number words. The architecture is composed of 5 layers and is designed in a way that allows it to learn numbers from one to ten by associating the sensory inputs (motor and auditory) coming from the iCub humanoid robotic platform. The architecture performance is validated and tested in two developmental experiments. In the first experiment, standard backpropagation is compared with a deep learning approach, in which weights and biases are pre-trained by means of a greedy algorithm and then refined with backpropagation. In the second experiment, six bi-cultural number learning conditions are compared to explore the impact of different languages (for number words) and finger counting strategies. The developmental experiments confirm the validity of the model and the increase in efficiency given by the deep learning approach. Results of the bi-cultural study are presented and discussed with respect to the neuro-psychological literature and implications of the results for learning situations are briefly outlined.
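The greedy pre-training step the abstract compares against plain backpropagation can be sketched generically with stacked sigmoid autoencoders followed by supervised fine-tuning. The layer sizes, toy finger-pattern data, and function names below are illustrative assumptions, not the paper's 5-layer architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def pretrain_layer(X, n_hidden, epochs=500, lr=1.0):
    """Greedy step: fit a one-hidden-layer sigmoid autoencoder to X and
    return the encoder weights plus the encoded data for the next layer."""
    n_in = X.shape[1]
    W, b = rng.normal(0, 0.1, (n_in, n_hidden)), np.zeros(n_hidden)
    V, c = rng.normal(0, 0.1, (n_hidden, n_in)), np.zeros(n_in)
    for _ in range(epochs):
        H = sigmoid(X @ W + b)          # encode
        R = sigmoid(H @ V + c)          # reconstruct
        dR = (R - X) * R * (1 - R)      # squared-error delta at the output
        dH = (dR @ V.T) * H * (1 - H)   # delta at the hidden layer
        V -= lr * H.T @ dR / len(X); c -= lr * dR.mean(0)
        W -= lr * X.T @ dH / len(X); b -= lr * dH.mean(0)
    return W, b, sigmoid(X @ W + b)

def finetune(X, Y, layers, epochs=5000, lr=1.0):
    """Refine the pre-trained stack (plus a fresh output layer) with backprop."""
    Wo = rng.normal(0, 0.1, (layers[-1][0].shape[1], Y.shape[1]))
    bo = np.zeros(Y.shape[1])
    for _ in range(epochs):
        acts = [X]                      # forward pass, keeping activations
        for W, b in layers:
            acts.append(sigmoid(acts[-1] @ W + b))
        out = sigmoid(acts[-1] @ Wo + bo)
        # backward pass: compute all gradients with the current weights
        d = (out - Y) * out * (1 - out)
        grads = [(acts[-1].T @ d / len(X), d.mean(0))]
        d = (d @ Wo.T) * acts[-1] * (1 - acts[-1])
        for i in range(len(layers) - 1, -1, -1):
            W, _ = layers[i]
            grads.append((acts[i].T @ d / len(X), d.mean(0)))
            d = (d @ W.T) * acts[i] * (1 - acts[i])
        Wo -= lr * grads[0][0]; bo -= lr * grads[0][1]
        for k, i in enumerate(range(len(layers) - 1, -1, -1)):
            layers[i] = (layers[i][0] - lr * grads[k + 1][0],
                         layers[i][1] - lr * grads[k + 1][1])
    return layers, Wo, bo

# Toy task: ten cumulative "finger" patterns mapped to one-hot number classes.
X = np.tril(np.ones((10, 10)))          # pattern i raises the first i+1 fingers
Y = np.eye(10)

W1, b1, H1 = pretrain_layer(X, 16)      # greedy, one layer at a time
W2, b2, _ = pretrain_layer(H1, 12)
layers, Wo, bo = finetune(X, Y, [(W1, b1), (W2, b2)])

def predict(X):
    h = X
    for W, b in layers:
        h = sigmoid(h @ W + b)
    return sigmoid(h @ Wo + bo).argmax(1)

accuracy = (predict(X) == Y.argmax(1)).mean()
```

The greedy phase gives each layer a sensible initialization from unsupervised reconstruction before the supervised refinement, which is the efficiency gain the experiment measures against training from random weights.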


2014 IEEE Symposium on Computational Intelligence, Cognitive Algorithms, Mind, and Brain (CCMB) | 2014

Grounding fingers, words and numbers in a cognitive developmental robot

Alessandro G. Di Nuovo; Vivian M. De La Cruz; Angelo Cangelosi

The young math learner must make the transition from a concrete number situation, such as counting objects (fingers often being the most readily available), to using a written symbolic form that stands for the quantities the sets of objects come to represent. This challenging process is often coupled with that of learning a verbal number system that is not always transparent to children. A number of theoretical approaches have been advanced to explain aspects of how this transition takes place in cognitive development. The results obtained with the model presented here show that a symbol grounding approach can be used to implement aspects of this transition in a cognitive robot. In the current extended version, the model develops finger and word representations, through the use of finger counting and verbal counting strategies, together with the visual representations of learned number symbols, which it uses to perform basic arithmetic operations. In the final training phases, the model is able to do this using only the number symbols as addends. We consider this an example of symbolic grounding, in that through direct sensory experience with the body (finger counting), a category of linguistic symbols is learned (number words), and both types of representations subsequently serve to ground higher-level (numerical) symbols, which are later used exclusively to perform the arithmetic operations.


International Symposium on Neural Networks | 2014

The iCub learns numbers: An embodied cognition study

Alessandro G. Di Nuovo; Vivian M. De La Cruz; Angelo Cangelosi; Santo Di Nuovo

Thanks to recent technological advances and the increasing interest in the Cognitive Developmental Robotics (CDR) paradigm, many popular platforms for scientific research have been designed to resemble the shape of the human body. The motivation behind this strongly humanoid design is the embodied cognition hypothesis, which affirms that all aspects of cognition are shaped by aspects of the body. Thus CDR is based on a synthetic approach that aims to provide new understanding of how human beings develop their higher cognitive functions. Following this paradigm, we have developed an artificial model, based on artificial neural networks, to explore finger counting and the association of number words (or tags) with the fingers, as bootstrapping for the representation of numbers in the humanoid robot iCub. In this paper, we detail experiments of our model with the iCub robotic platform. Results of the number learning with proprioceptive data from the real platform are reported and compared with those obtained with the simulated platform. These results support the thesis that learning the number words in sequence, along with finger configurations, helps the building of the initial representation of number in the robot. Moreover, the comparison between the real and simulated iCub gives insights on the use of these platforms as tools for CDR.


Neural Network World | 2011

A Biologically Inspired Neural Model of Vision-Language Integration

Alessio Plebe; Marco Mazzone; Vivian M. De La Cruz

One crucial step in the construction of the human representation of the world is found at the boundary between two basic stimuli: visual experience and the sounds of language. In the developmental stage when the ability of recognizing objects consolidates, and that of segmenting streams of sounds into familiar chunks emerges, the mind gradually grasps the idea that utterances are related to the visible entities of the world. The model presented here is an attempt to reproduce this process, in its basic form, simulating the visual and auditory pathways, and a portion of the prefrontal cortex putatively responsible for more abstract representations of object classes. Simulations have been performed with the model, using a set of images of 100 real world objects seen from many different viewpoints and waveforms of labels of various classes of objects. Subsequently, categorization processes with and without language are also compared.


Cognitive Aspects of Computational Language Acquisition | 2013

In Learning Nouns and Adjectives Remembering Matters: A Cortical Model

Alessio Plebe; Vivian M. De La Cruz; Marco Mazzone

The approach used and discussed here simulates early lexical acquisition from a neural point of view. We use a hierarchy of artificial cortical maps to build and develop models of artificial learners that are subsequently trained to recognize objects, their names, and then the adjectives pertaining to their color. Results of the model can explain what has emerged in a series of developmental research studies in early language acquisition, and can account for the different developmental patterns followed by children in acquiring nouns and adjectives, by perceptually driven associational learning processes at the synaptic level.


Proceedings of the Workshop on Cognitive Aspects of Computational Language Acquisition | 2007

Simulating the acquisition of object names

Alessio Plebe; Vivian M. De La Cruz; Marco Mazzone

Naming requires recognition. Recognition requires the ability to categorize objects and events. Infants under six months of age are capable of making fine-grained discriminations of object boundaries and three-dimensional space. At 8 to 10 months, a child's object categories are sufficiently stable and flexible to be used as the foundation for labeling and referencing actions. What mechanisms in the brain underlie the unfolding of these capacities? In this article, we describe a neural network model which attempts to simulate, in a biologically plausible way, the process by which infants learn to recognize objects and words through exposure to visual stimuli and vocal sounds.

Collaboration


Dive into Vivian M. De La Cruz's collaborations.

Top Co-Authors

Davide Marocco

University of Naples Federico II
